deno EventListenerObject EventListenerObject =================== ``` interface EventListenerObject {handleEvent(evt: [Event](event)): void | Promise<void>; } ``` Methods ------- > `handleEvent(evt: [Event](event)): void | Promise<void>` deno Deno.errors.TimedOut Deno.errors.TimedOut ==================== Raised when the underlying operating system reports that an I/O operation has timed out (`ETIMEDOUT`). ``` class TimedOut extends Error { } ``` Extends ------- > `Error` deno Deno.FsWatcher Deno.FsWatcher ============== Returned by [`Deno.watchFs`](deno#watchFs). It is an async iterator that yields file system events. To stop watching the file system, call the `.close()` method. ``` interface FsWatcher extends AsyncIterable<[FsEvent](deno.fsevent)> { readonly rid: number; [[Symbol.asyncIterator]](): AsyncIterableIterator<[FsEvent](deno.fsevent)>; close(): void; return?(value?: any): Promise<IteratorResult<[FsEvent](deno.fsevent)>>; } ``` Extends ------- > `AsyncIterable<[FsEvent](deno.fsevent)>` Properties ---------- > `readonly rid: number` The resource id. Methods ------- > `[[Symbol.asyncIterator]](): AsyncIterableIterator<[FsEvent](deno.fsevent)>` > `close(): void` Stops watching the file system and closes the watcher resource. > `return?(value?: any): Promise<IteratorResult<[FsEvent](deno.fsevent)>>` Stops watching the file system and closes the watcher resource. 
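The `EventListenerObject` interface above has no usage example. Any object with a `handleEvent` method can be registered in place of a listener function; a minimal, runtime-agnostic sketch (the `"ping"` event name is purely illustrative):

```typescript
// An EventListenerObject: the target invokes its handleEvent(evt) method
// instead of calling a plain listener function.
const received: string[] = [];

const listener = {
  handleEvent(evt: Event): void {
    received.push(evt.type);
  },
};

const target = new EventTarget();
target.addEventListener("ping", listener);
target.dispatchEvent(new Event("ping"));
// handleEvent has now run once for the dispatched "ping" event.
```

Because `handleEvent` is looked up on each dispatch, the listener object can change its behavior between events without re-registering.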
deno Transferable Transferable ============ ``` type Transferable = ArrayBuffer | [MessagePort](messageport); ``` Type ---- > `ArrayBuffer | [MessagePort](messageport)` deno Deno.RequestEvent Deno.RequestEvent ================= ``` interface RequestEvent { readonly request: [Request](request); respondWith(r: [Response](response) | Promise<[Response](response)>): Promise<void>; } ``` Properties ---------- > `readonly request: [Request](request)` Methods ------- > `respondWith(r: [Response](response) | Promise<[Response](response)>): Promise<void>` deno Deno.readAll Deno.readAll ============ deprecated Read the Reader `r` until EOF (`null`) and resolve to the content as a `Uint8Array`. @deprecated Use [`readAll`](https://deno.land/std/streams/conversion.ts?s=readAll) from [`std/streams/conversion.ts`](https://deno.land/std/streams/conversion.ts) instead. `Deno.readAll` will be removed in the future. ``` function readAll(r: [Reader](deno.reader)): Promise<Uint8Array>; ``` > `readAll(r: [Reader](deno.reader)): Promise<Uint8Array>` ### Parameters > `r: [Reader](deno.reader)` ### Return Type > `Promise<Uint8Array>` deno Deno.ResolveDnsOptions Deno.ResolveDnsOptions ====================== ``` interface ResolveDnsOptions {nameServer?: {ipAddr: string; port?: number; }; } ``` Properties ---------- > `nameServer?: {ipAddr: string; > port?: number; }` The name server to be used for lookups. If not specified, defaults to the system configuration, e.g. `/etc/resolv.conf` on Unix. deno Deno.upgradeWebSocket Deno.upgradeWebSocket ===================== Used to upgrade an incoming HTTP request to a WebSocket. Given a request, returns a pair of a WebSocket and a Response. The original request must be responded to with the returned response for the websocket upgrade to be successful. 
``` const conn = Deno.listen({ port: 80 }); const httpConn = Deno.serveHttp(await conn.accept()); const e = await httpConn.nextRequest(); if (e) { const { socket, response } = Deno.upgradeWebSocket(e.request); socket.onopen = () => { socket.send("Hello World!"); }; socket.onmessage = (e) => { console.log(e.data); socket.close(); }; socket.onclose = () => console.log("WebSocket has been closed."); socket.onerror = (e) => console.error("WebSocket error:", e); e.respondWith(response); } ``` If the request body is disturbed (read from) before the upgrade is completed, upgrading fails. This operation does not yet consume the request or open the websocket. This only happens once the returned response has been passed to `respondWith`. ``` function upgradeWebSocket(request: [Request](request), options?: [UpgradeWebSocketOptions](deno.upgradewebsocketoptions)): [WebSocketUpgrade](deno.websocketupgrade); ``` > `upgradeWebSocket(request: [Request](request), options?: [UpgradeWebSocketOptions](deno.upgradewebsocketoptions)): [WebSocketUpgrade](deno.websocketupgrade)` ### Parameters > `request: [Request](request)` > `options?: [UpgradeWebSocketOptions](deno.upgradewebsocketoptions) optional` ### Return Type > `[WebSocketUpgrade](deno.websocketupgrade)` deno Deno.readFile Deno.readFile ============= Reads and resolves to the entire contents of a file as an array of bytes. `TextDecoder` can be used to transform the bytes to string if required. Reading a directory returns an empty data array. ``` const decoder = new TextDecoder("utf-8"); const data = await Deno.readFile("hello.txt"); console.log(decoder.decode(data)); ``` Requires `allow-read` permission. 
``` function readFile(path: string | [URL](url), options?: [ReadFileOptions](deno.readfileoptions)): Promise<Uint8Array>; ``` > `readFile(path: string | [URL](url), options?: [ReadFileOptions](deno.readfileoptions)): Promise<Uint8Array>` ### Parameters > `path: string | [URL](url)` > `options?: [ReadFileOptions](deno.readfileoptions) optional` ### Return Type > `Promise<Uint8Array>` deno GPUSupportedLimits GPUSupportedLimits ================== ``` class GPUSupportedLimits { maxBindGroups?: number; maxComputeInvocationsPerWorkgroup?: number; maxComputeWorkgroupSizeX?: number; maxComputeWorkgroupSizeY?: number; maxComputeWorkgroupSizeZ?: number; maxComputeWorkgroupsPerDimension?: number; maxComputeWorkgroupStorageSize?: number; maxDynamicStorageBuffersPerPipelineLayout?: number; maxDynamicUniformBuffersPerPipelineLayout?: number; maxInterStageShaderComponents?: number; maxSampledTexturesPerShaderStage?: number; maxSamplersPerShaderStage?: number; maxStorageBufferBindingSize?: number; maxStorageBuffersPerShaderStage?: number; maxStorageTexturesPerShaderStage?: number; maxTextureArrayLayers?: number; maxTextureDimension1D?: number; maxTextureDimension2D?: number; maxTextureDimension3D?: number; maxUniformBufferBindingSize?: number; maxUniformBuffersPerShaderStage?: number; maxVertexAttributes?: number; maxVertexBufferArrayStride?: number; maxVertexBuffers?: number; minStorageBufferOffsetAlignment?: number; minUniformBufferOffsetAlignment?: number; } ``` Properties ---------- > `maxBindGroups: number` > `maxComputeInvocationsPerWorkgroup: number` > `maxComputeWorkgroupSizeX: number` > `maxComputeWorkgroupSizeY: number` > `maxComputeWorkgroupSizeZ: number` > `maxComputeWorkgroupsPerDimension: number` > `maxComputeWorkgroupStorageSize: number` > `maxDynamicStorageBuffersPerPipelineLayout: number` > `maxDynamicUniformBuffersPerPipelineLayout: number` > `maxInterStageShaderComponents: number` > `maxSampledTexturesPerShaderStage: number` > `maxSamplersPerShaderStage: number` 
> `maxStorageBufferBindingSize: number` > `maxStorageBuffersPerShaderStage: number` > `maxStorageTexturesPerShaderStage: number` > `maxTextureArrayLayers: number` > `maxTextureDimension1D: number` > `maxTextureDimension2D: number` > `maxTextureDimension3D: number` > `maxUniformBufferBindingSize: number` > `maxUniformBuffersPerShaderStage: number` > `maxVertexAttributes: number` > `maxVertexBufferArrayStride: number` > `maxVertexBuffers: number` > `minStorageBufferOffsetAlignment: number` > `minUniformBufferOffsetAlignment: number` deno EcKeyAlgorithm EcKeyAlgorithm ============== ``` interface EcKeyAlgorithm extends [KeyAlgorithm](keyalgorithm) {namedCurve: [NamedCurve](namedcurve); } ``` Extends ------- > `[KeyAlgorithm](keyalgorithm)` Properties ---------- > `namedCurve: [NamedCurve](namedcurve)` deno WebAssembly.ModuleImports WebAssembly.ModuleImports ========================= ``` type ModuleImports = Record<string, [ImportValue](webassembly.importvalue)>; ``` Type ---- > `Record<string, [ImportValue](webassembly.importvalue)>` deno ReadableStreamReadResult ReadableStreamReadResult ======================== ``` type ReadableStreamReadResult<T> = [ReadableStreamReadValueResult](readablestreamreadvalueresult)<T> | [ReadableStreamReadDoneResult](readablestreamreaddoneresult)<T>; ``` Type Parameters --------------- > `T` Type ---- > `[ReadableStreamReadValueResult](readablestreamreadvalueresult)<T> | [ReadableStreamReadDoneResult](readablestreamreaddoneresult)<T>` deno setTimeout setTimeout ========== Sets a timer which executes a function once after the timer expires. Returns an id which may be used to cancel the timeout. 
``` setTimeout(() => { console.log('hello'); }, 500); ``` ``` function setTimeout( cb: (...args: any[]) => void, delay?: number, ...args: any[],): number; ``` > `setTimeout(cb: (...args: any[]) => void, delay?: number, ...args: any[]): number` ### Parameters > `cb: (...args: any[]) => void` > `delay?: number optional` > `...args: any[] optional` ### Return Type > `number` deno GPUErrorFilter GPUErrorFilter ============== ``` type GPUErrorFilter = "out-of-memory" | "validation"; ``` Type ---- > `"out-of-memory" | "validation"` deno GPUCommandEncoder GPUCommandEncoder ================= ``` class GPUCommandEncoder implements [GPUObjectBase](gpuobjectbase) { label: string; beginComputePass(descriptor?: [GPUComputePassDescriptor](gpucomputepassdescriptor)): [GPUComputePassEncoder](gpucomputepassencoder); beginRenderPass(descriptor: [GPURenderPassDescriptor](gpurenderpassdescriptor)): [GPURenderPassEncoder](gpurenderpassencoder); clearBuffer( destination: [GPUBuffer](gpubuffer), destinationOffset?: number, size?: number,): undefined; copyBufferToBuffer( source: [GPUBuffer](gpubuffer), sourceOffset: number, destination: [GPUBuffer](gpubuffer), destinationOffset: number, size: number,): undefined; copyBufferToTexture( source: [GPUImageCopyBuffer](gpuimagecopybuffer), destination: [GPUImageCopyTexture](gpuimagecopytexture), copySize: [GPUExtent3D](gpuextent3d),): undefined; copyTextureToBuffer( source: [GPUImageCopyTexture](gpuimagecopytexture), destination: [GPUImageCopyBuffer](gpuimagecopybuffer), copySize: [GPUExtent3D](gpuextent3d),): undefined; copyTextureToTexture( source: [GPUImageCopyTexture](gpuimagecopytexture), destination: [GPUImageCopyTexture](gpuimagecopytexture), copySize: [GPUExtent3D](gpuextent3d),): undefined; finish(descriptor?: [GPUCommandBufferDescriptor](gpucommandbufferdescriptor)): [GPUCommandBuffer](gpucommandbuffer); insertDebugMarker(markerLabel: string): undefined; popDebugGroup(): undefined; pushDebugGroup(groupLabel: string): undefined; 
resolveQuerySet( querySet: [GPUQuerySet](gpuqueryset), firstQuery: number, queryCount: number, destination: [GPUBuffer](gpubuffer), destinationOffset: number,): undefined; writeTimestamp(querySet: [GPUQuerySet](gpuqueryset), queryIndex: number): undefined; } ``` Implements ---------- > `[GPUObjectBase](gpuobjectbase)` Properties ---------- > `label: string` Methods ------- > `beginComputePass(descriptor?: [GPUComputePassDescriptor](gpucomputepassdescriptor)): [GPUComputePassEncoder](gpucomputepassencoder)` > `beginRenderPass(descriptor: [GPURenderPassDescriptor](gpurenderpassdescriptor)): [GPURenderPassEncoder](gpurenderpassencoder)` > `clearBuffer(destination: [GPUBuffer](gpubuffer), destinationOffset?: number, size?: number): undefined` > `copyBufferToBuffer(source: [GPUBuffer](gpubuffer), sourceOffset: number, destination: [GPUBuffer](gpubuffer), destinationOffset: number, size: number): undefined` > `copyBufferToTexture(source: [GPUImageCopyBuffer](gpuimagecopybuffer), destination: [GPUImageCopyTexture](gpuimagecopytexture), copySize: [GPUExtent3D](gpuextent3d)): undefined` > `copyTextureToBuffer(source: [GPUImageCopyTexture](gpuimagecopytexture), destination: [GPUImageCopyBuffer](gpuimagecopybuffer), copySize: [GPUExtent3D](gpuextent3d)): undefined` > `copyTextureToTexture(source: [GPUImageCopyTexture](gpuimagecopytexture), destination: [GPUImageCopyTexture](gpuimagecopytexture), copySize: [GPUExtent3D](gpuextent3d)): undefined` > `finish(descriptor?: [GPUCommandBufferDescriptor](gpucommandbufferdescriptor)): [GPUCommandBuffer](gpucommandbuffer)` > `insertDebugMarker(markerLabel: string): undefined` > `popDebugGroup(): undefined` > `pushDebugGroup(groupLabel: string): undefined` > `resolveQuerySet(querySet: [GPUQuerySet](gpuqueryset), firstQuery: number, queryCount: number, destination: [GPUBuffer](gpubuffer), destinationOffset: number): undefined` > `writeTimestamp(querySet: [GPUQuerySet](gpuqueryset), queryIndex: number): undefined` deno ImportMeta 
ImportMeta ========== Deno provides extra properties on `import.meta`. These are included here to ensure that they are still available when using the Deno namespace in conjunction with other type libs, like `dom`. ``` interface ImportMeta {main: boolean; url: string; resolve(specifier: string): string; } ``` Properties ---------- > `main: boolean` A flag that indicates if the current module is the main module that was called when starting the program under Deno. ``` if (import.meta.main) { // this was loaded as the main module, maybe do some bootstrapping } ``` > `url: string` A string representation of the fully qualified module URL. When the module is loaded locally, the value will be a file URL (e.g. `file:///path/module.ts`). You can also parse the string as a URL to determine more information about how the current module was loaded. For example, to determine if a module was loaded locally: ``` const url = new URL(import.meta.url); if (url.protocol === "file:") { console.log("this module was loaded locally"); } ``` Methods ------- > `resolve(specifier: string): string` A function that returns the resolved specifier, as if it would be imported using `import(specifier)`. 
``` console.log(import.meta.resolve("./foo.js")); // file:///dev/foo.js ``` deno FilePropertyBag FilePropertyBag =============== ``` interface FilePropertyBag extends [BlobPropertyBag](blobpropertybag) {lastModified?: number; } ``` Extends ------- > `[BlobPropertyBag](blobpropertybag)` Properties ---------- > `lastModified?: number` deno GPUBindGroupLayout GPUBindGroupLayout ================== ``` class GPUBindGroupLayout implements [GPUObjectBase](gpuobjectbase) { label: string; } ``` Implements ---------- > `[GPUObjectBase](gpuobjectbase)` Properties ---------- > `label: string` deno PromiseRejectionEvent PromiseRejectionEvent ===================== ``` class PromiseRejectionEvent extends Event { constructor(type: string, eventInitDict?: [PromiseRejectionEventInit](promiserejectioneventinit)); readonly promise: Promise<any>; readonly reason: any; } ``` Extends ------- > `Event` Constructors ------------ > `new PromiseRejectionEvent(type: string, eventInitDict?: [PromiseRejectionEventInit](promiserejectioneventinit))` Properties ---------- > `promise: Promise<any>` > `reason: any` deno WebAssembly.ImportExportKind WebAssembly.ImportExportKind ============================ ``` type ImportExportKind = | "function" | "global" | "memory" | "table"; ``` Type ---- > `"function" | "global" | "memory" | "table"` deno Response Response ======== This Fetch API interface represents the response to a request. 
``` class Response implements [Body](body) { constructor(body?: [BodyInit](bodyinit) | null, init?: [ResponseInit](responseinit)); readonly body: [ReadableStream](readablestream)<Uint8Array> | null; readonly bodyUsed: boolean; readonly headers: [Headers](headers); readonly ok: boolean; readonly redirected: boolean; readonly status: number; readonly statusText: string; readonly trailer: Promise<[Headers](headers)>; readonly type: [ResponseType](responsetype); readonly url: string; arrayBuffer(): Promise<ArrayBuffer>; blob(): Promise<[Blob](blob)>; clone(): [Response](response); formData(): Promise<[FormData](formdata)>; json(): Promise<any>; text(): Promise<string>; static error(): [Response](response); static json(data: unknown, init?: [ResponseInit](responseinit)): [Response](response); static redirect(url: string | [URL](url), status?: number): [Response](response); } ``` Implements ---------- > `[Body](body)` Constructors ------------ > `new Response(body?: [BodyInit](bodyinit) | null, init?: [ResponseInit](responseinit))` Properties ---------- > `body: [ReadableStream](readablestream)<Uint8Array> | null` A simple getter used to expose a `ReadableStream` of the body contents. > `bodyUsed: boolean` Stores a `Boolean` that declares whether the body has been used in a response yet. > `headers: [Headers](headers)` > `ok: boolean` > `redirected: boolean` > `status: number` > `statusText: string` > `trailer: Promise<[Headers](headers)>` > `type: [ResponseType](responsetype)` > `url: string` Methods ------- > `arrayBuffer(): Promise<ArrayBuffer>` Takes a `Response` stream and reads it to completion. It returns a promise that resolves with an `ArrayBuffer`. > `blob(): Promise<[Blob](blob)>` Takes a `Response` stream and reads it to completion. It returns a promise that resolves with a `Blob`. > `clone(): [Response](response)` > `formData(): Promise<[FormData](formdata)>` Takes a `Response` stream and reads it to completion. 
It returns a promise that resolves with a `FormData` object. > `json(): Promise<any>` Takes a `Response` stream and reads it to completion. It returns a promise that resolves with the result of parsing the body text as JSON. > `text(): Promise<string>` Takes a `Response` stream and reads it to completion. It returns a promise that resolves with a `USVString` (text). Static Methods -------------- > `error(): [Response](response)` > `json(data: unknown, init?: [ResponseInit](responseinit)): [Response](response)` > `redirect(url: string | [URL](url), status?: number): [Response](response)` deno Deno.errors.InvalidData Deno.errors.InvalidData ======================= Raised when an operation returns data that is invalid for the operation being performed. ``` class InvalidData extends Error { } ``` Extends ------- > `Error` deno GPU GPU === ``` class GPU { requestAdapter(options?: [GPURequestAdapterOptions](gpurequestadapteroptions)): Promise<[GPUAdapter](gpuadapter) | null>;} ``` Methods ------- > `requestAdapter(options?: [GPURequestAdapterOptions](gpurequestadapteroptions)): Promise<[GPUAdapter](gpuadapter) | null>` deno Deno.writeFileSync Deno.writeFileSync ================== Synchronously write `data` to the given `path`, by default creating a new file if needed, else overwriting. ``` const encoder = new TextEncoder(); const data = encoder.encode("Hello world\n"); Deno.writeFileSync("hello1.txt", data); // overwrite "hello1.txt" or create it Deno.writeFileSync("hello2.txt", data, { create: false }); // only works if "hello2.txt" exists Deno.writeFileSync("hello3.txt", data, { mode: 0o777 }); // set permissions on new file Deno.writeFileSync("hello4.txt", data, { append: true }); // add data to the end of the file ``` Requires `allow-write` permission, and `allow-read` if `options.create` is `false`. 
``` function writeFileSync( path: string | [URL](url), data: Uint8Array, options?: [WriteFileOptions](deno.writefileoptions),): void; ``` > `writeFileSync(path: string | [URL](url), data: Uint8Array, options?: [WriteFileOptions](deno.writefileoptions)): void` ### Parameters > `path: string | [URL](url)` > `data: Uint8Array` > `options?: [WriteFileOptions](deno.writefileoptions) optional` ### Return Type > `void`
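`Deno.writeFileSync` above (like `Deno.readFile` earlier) trades in raw bytes, and the standard `TextEncoder`/`TextDecoder` pair converts between strings and the `Uint8Array`s those APIs exchange. A small runtime-agnostic sketch of the round trip (no file I/O, so no permissions are needed):

```typescript
// Encode a string to the UTF-8 bytes Deno.writeFileSync expects, then
// decode them back the way Deno.readFile callers typically do.
const encoder = new TextEncoder(); // always encodes as UTF-8
const decoder = new TextDecoder("utf-8");

const bytes = encoder.encode("Hello world\n");
const roundTripped = decoder.decode(bytes);

console.log(bytes.length); // 12: eleven ASCII characters plus the newline
console.log(roundTripped === "Hello world\n"); // true
```

Because every byte here is ASCII, the byte count equals the character count; multi-byte UTF-8 characters would make `bytes.length` larger than the string length.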
deno GPULoadOp GPULoadOp ========= ``` type GPULoadOp = "load" | "clear"; ``` Type ---- > `"load" | "clear"` deno Deno.errors.ConnectionAborted Deno.errors.ConnectionAborted ============================= Raised when the underlying operating system reports an `ECONNABORTED` error. ``` class ConnectionAborted extends Error { } ``` Extends ------- > `Error` deno GPUPipelineLayoutDescriptor GPUPipelineLayoutDescriptor =========================== ``` interface GPUPipelineLayoutDescriptor extends [GPUObjectDescriptorBase](gpuobjectdescriptorbase) {bindGroupLayouts: [GPUBindGroupLayout](gpubindgrouplayout)[]; } ``` Extends ------- > `[GPUObjectDescriptorBase](gpuobjectdescriptorbase)` Properties ---------- > `bindGroupLayouts: [GPUBindGroupLayout](gpubindgrouplayout)[]` deno WebAssembly.Exports WebAssembly.Exports =================== ``` type Exports = Record<string, [ExportValue](webassembly.exportvalue)>; ``` Type ---- > `Record<string, [ExportValue](webassembly.exportvalue)>` deno clearInterval clearInterval ============= Cancels a timed, repeating action which was previously started by a call to `setInterval()`. ``` const id = setInterval(() => {console.log('hello');}, 500); // ... 
clearInterval(id); ``` ``` function clearInterval(id?: number): void; ``` > `clearInterval(id?: number): void` ### Parameters > `id?: number optional` ### Return Type > `void` deno HmacKeyAlgorithm HmacKeyAlgorithm ================ ``` interface HmacKeyAlgorithm extends [KeyAlgorithm](keyalgorithm) {hash: [KeyAlgorithm](keyalgorithm); length: number; } ``` Extends ------- > `[KeyAlgorithm](keyalgorithm)` Properties ---------- > `hash: [KeyAlgorithm](keyalgorithm)` > `length: number` deno TextEncoderStream TextEncoderStream ================= ``` interface TextEncoderStream { readonly [[Symbol.toStringTag]]: string; readonly encoding: "utf-8"; readonly readable: [ReadableStream](readablestream)<Uint8Array>; readonly writable: [WritableStream](writablestream)<string>; } ``` ``` var TextEncoderStream: {prototype: [TextEncoderStream](textencoderstream); new (): [TextEncoderStream](textencoderstream); }; ``` Properties ---------- > `readonly [[Symbol.toStringTag]]: string` > `readonly encoding: "utf-8"` Returns "utf-8". > `readonly readable: [ReadableStream](readablestream)<Uint8Array>` > `readonly writable: [WritableStream](writablestream)<string>` deno Deno.RemoveOptions Deno.RemoveOptions ================== Options which can be set when using [`Deno.remove`](deno#remove) and [`Deno.removeSync`](deno#removeSync). ``` interface RemoveOptions {recursive?: boolean; } ``` Properties ---------- > `recursive?: boolean` Defaults to `false`. If set to `true`, path will be removed even if it's a non-empty directory. deno GPUSamplerBindingLayout GPUSamplerBindingLayout ======================= ``` interface GPUSamplerBindingLayout {type?: [GPUSamplerBindingType](gpusamplerbindingtype); } ``` Properties ---------- > `type?: [GPUSamplerBindingType](gpusamplerbindingtype)` deno Deno.writeTextFileSync Deno.writeTextFileSync ====================== Synchronously write string `data` to the given `path`, by default creating a new file if needed, else overwriting. 
``` Deno.writeTextFileSync("hello1.txt", "Hello world\n"); // overwrite "hello1.txt" or create it ``` Requires `allow-write` permission, and `allow-read` if `options.create` is `false`. ``` function writeTextFileSync( path: string | [URL](url), data: string, options?: [WriteFileOptions](deno.writefileoptions),): void; ``` > `writeTextFileSync(path: string | [URL](url), data: string, options?: [WriteFileOptions](deno.writefileoptions)): void` ### Parameters > `path: string | [URL](url)` > `data: string` > `options?: [WriteFileOptions](deno.writefileoptions) optional` ### Return Type > `void` deno GPUTextureDimension GPUTextureDimension =================== ``` type GPUTextureDimension = "1d" | "2d" | "3d"; ``` Type ---- > `"1d" | "2d" | "3d"` deno GPUPipelineLayout GPUPipelineLayout ================= ``` class GPUPipelineLayout implements [GPUObjectBase](gpuobjectbase) { label: string; } ``` Implements ---------- > `[GPUObjectBase](gpuobjectbase)` Properties ---------- > `label: string` deno GPUBufferDescriptor GPUBufferDescriptor =================== ``` interface GPUBufferDescriptor extends [GPUObjectDescriptorBase](gpuobjectdescriptorbase) {mappedAtCreation?: boolean; size: number; usage: [GPUBufferUsageFlags](gpubufferusageflags); } ``` Extends ------- > `[GPUObjectDescriptorBase](gpuobjectdescriptorbase)` Properties ---------- > `mappedAtCreation?: boolean` > `size: number` > `usage: [GPUBufferUsageFlags](gpubufferusageflags)` deno Deno.customInspect Deno.customInspect ================== deprecated A symbol which can be used as a key for a custom method which will be called when `Deno.inspect()` is called, or when the object is logged to the console. @deprecated This symbol is deprecated since 1.9. Use `Symbol.for("Deno.customInspect")` instead. ``` const customInspect: unique symbol; ``` deno Deno.test Deno.test ========= Register a test which will be run when `deno test` is used on the command line and the containing module looks like a test module. 
`fn` can be async if required. ``` import { assertEquals } from "https://deno.land/std/testing/asserts.ts"; Deno.test({ name: "example test", fn() { assertEquals("world", "world"); }, }); Deno.test({ name: "example ignored test", ignore: Deno.build.os === "windows", fn() { // This test is ignored only on Windows machines }, }); Deno.test({ name: "example async test", async fn() { const decoder = new TextDecoder("utf-8"); const data = await Deno.readFile("hello_world.txt"); assertEquals(decoder.decode(data), "Hello world"); } }); ``` ``` function test(t: [TestDefinition](deno.testdefinition)): void; function test(name: string, fn: (t: [TestContext](deno.testcontext)) => void | Promise<void>): void; function test(fn: (t: [TestContext](deno.testcontext)) => void | Promise<void>): void; function test( name: string, options: Omit<[TestDefinition](deno.testdefinition), "fn" | "name">, fn: (t: [TestContext](deno.testcontext)) => void | Promise<void>,): void; function test(options: Omit<[TestDefinition](deno.testdefinition), "fn">, fn: (t: [TestContext](deno.testcontext)) => void | Promise<void>): void; function test(options: Omit<[TestDefinition](deno.testdefinition), "fn" | "name">, fn: (t: [TestContext](deno.testcontext)) => void | Promise<void>): void; ``` > `test(t: [TestDefinition](deno.testdefinition)): void` Register a test which will be run when `deno test` is used on the command line and the containing module looks like a test module. `fn` can be async if required. 
``` import { assertEquals } from "https://deno.land/std/testing/asserts.ts"; Deno.test({ name: "example test", fn() { assertEquals("world", "world"); }, }); Deno.test({ name: "example ignored test", ignore: Deno.build.os === "windows", fn() { // This test is ignored only on Windows machines }, }); Deno.test({ name: "example async test", async fn() { const decoder = new TextDecoder("utf-8"); const data = await Deno.readFile("hello_world.txt"); assertEquals(decoder.decode(data), "Hello world"); } }); ``` ### Parameters > `t: [TestDefinition](deno.testdefinition)` ### Return Type > `void` > `test(name: string, fn: (t: [TestContext](deno.testcontext)) => void | Promise<void>): void` Register a test which will be run when `deno test` is used on the command line and the containing module looks like a test module. `fn` can be async if required. ``` import { assertEquals } from "https://deno.land/std/testing/asserts.ts"; Deno.test("My test description", () => { assertEquals("hello", "hello"); }); Deno.test("My async test description", async () => { const decoder = new TextDecoder("utf-8"); const data = await Deno.readFile("hello_world.txt"); assertEquals(decoder.decode(data), "Hello world"); }); ``` ### Parameters > `name: string` > `fn: (t: [TestContext](deno.testcontext)) => void | Promise<void>` ### Return Type > `void` > `test(fn: (t: [TestContext](deno.testcontext)) => void | Promise<void>): void` Register a test which will be run when `deno test` is used on the command line and the containing module looks like a test module. `fn` can be async if required. The declared function must have a name. 
``` import { assertEquals } from "https://deno.land/std/testing/asserts.ts"; Deno.test(function myTestName() { assertEquals("hello", "hello"); }); Deno.test(async function myOtherTestName() { const decoder = new TextDecoder("utf-8"); const data = await Deno.readFile("hello_world.txt"); assertEquals(decoder.decode(data), "Hello world"); }); ``` ### Parameters > `fn: (t: [TestContext](deno.testcontext)) => void | Promise<void>` ### Return Type > `void` > `test(name: string, options: Omit<[TestDefinition](deno.testdefinition), "fn" | "name">, fn: (t: [TestContext](deno.testcontext)) => void | Promise<void>): void` Register a test which will be run when `deno test` is used on the command line and the containing module looks like a test module. `fn` can be async if required. ``` import {assert, fail, assertEquals} from "https://deno.land/std/testing/asserts.ts"; Deno.test("My test description", { permissions: { read: true } }, (): void => { assertEquals("hello", "hello"); }); Deno.test("My async test description", { permissions: { read: false } }, async (): Promise<void> => { const decoder = new TextDecoder("utf-8"); const data = await Deno.readFile("hello_world.txt"); assertEquals(decoder.decode(data), "Hello world"); }); ``` ### Parameters > `name: string` > `options: Omit<[TestDefinition](deno.testdefinition), "fn" | "name">` > `fn: (t: [TestContext](deno.testcontext)) => void | Promise<void>` ### Return Type > `void` > `test(options: Omit<[TestDefinition](deno.testdefinition), "fn">, fn: (t: [TestContext](deno.testcontext)) => void | Promise<void>): void` Register a test which will be run when `deno test` is used on the command line and the containing module looks like a test module. `fn` can be async if required. 
``` import { assertEquals } from "https://deno.land/std/testing/asserts.ts"; Deno.test( { name: "My test description", permissions: { read: true }, }, () => { assertEquals("hello", "hello"); }, ); Deno.test( { name: "My async test description", permissions: { read: false }, }, async () => { const decoder = new TextDecoder("utf-8"); const data = await Deno.readFile("hello_world.txt"); assertEquals(decoder.decode(data), "Hello world"); }, ); ``` ### Parameters > `options: Omit<[TestDefinition](deno.testdefinition), "fn">` > `fn: (t: [TestContext](deno.testcontext)) => void | Promise<void>` ### Return Type > `void` > `test(options: Omit<[TestDefinition](deno.testdefinition), "fn" | "name">, fn: (t: [TestContext](deno.testcontext)) => void | Promise<void>): void` Register a test which will be run when `deno test` is used on the command line and the containing module looks like a test module. `fn` can be async if required. The declared function must have a name. ``` import { assertEquals } from "https://deno.land/std/testing/asserts.ts"; Deno.test( { permissions: { read: true } }, function myTestName() { assertEquals("hello", "hello"); }, ); Deno.test( { permissions: { read: false } }, async function myOtherTestName() { const decoder = new TextDecoder("utf-8"); const data = await Deno.readFile("hello_world.txt"); assertEquals(decoder.decode(data), "Hello world"); }, ); ``` ### Parameters > `options: Omit<[TestDefinition](deno.testdefinition), "fn" | "name">` > `fn: (t: [TestContext](deno.testcontext)) => void | Promise<void>` ### Return Type > `void` deno Deno.errors.PermissionDenied Deno.errors.PermissionDenied ============================ Raised when the underlying operating system indicates that the user the Deno process is running under does not have the appropriate permissions for a file or resource, or that the user *did not* provide a required `--allow-*` flag. 
``` class PermissionDenied extends Error { } ``` Extends ------- > `Error` deno Deno.refTimer Deno.refTimer ============= Make the timer of the given `id` block the event loop from finishing. ``` function refTimer(id: number): void; ``` > `refTimer(id: number): void` ### Parameters > `id: number` ### Return Type > `void` deno GPUShaderStageFlags GPUShaderStageFlags =================== ``` type GPUShaderStageFlags = number; ``` Type ---- > `number` deno ReadableStreamBYOBReadDoneResult ReadableStreamBYOBReadDoneResult ================================ ``` interface ReadableStreamBYOBReadDoneResult <V extends ArrayBufferView> {done: true; value?: V; } ``` Type Parameters --------------- > `V extends ArrayBufferView` Properties ---------- > `done: true` > `value?: V` deno BlobPart BlobPart ======== ``` type BlobPart = [BufferSource](buffersource) | [Blob](blob) | string; ``` Type ---- > `[BufferSource](buffersource) | [Blob](blob) | string` deno GPUShaderModule GPUShaderModule =============== ``` class GPUShaderModule implements [GPUObjectBase](gpuobjectbase) { label: string; compilationInfo(): Promise<[GPUCompilationInfo](gpucompilationinfo)>; } ``` Implements ---------- > `[GPUObjectBase](gpuobjectbase)` Properties ---------- > `label: string` Methods ------- > `compilationInfo(): Promise<[GPUCompilationInfo](gpucompilationinfo)>` deno ReadableStream ReadableStream ============== This Streams API interface represents a readable stream of byte data. The Fetch API offers a concrete instance of a ReadableStream through the body property of a Response object. 
``` interface ReadableStream <R = any> { readonly locked: boolean; [[Symbol.asyncIterator]](options?: {preventCancel?: boolean; }): AsyncIterableIterator<R>; cancel(reason?: any): Promise<void>; getReader(options: {mode: "byob"; }): [ReadableStreamBYOBReader](readablestreambyobreader); getReader(options?: {mode?: undefined; }): [ReadableStreamDefaultReader](readablestreamdefaultreader)<R>; pipeThrough<T>({ writable, readable }: {writable: [WritableStream](writablestream)<R>; readable: [ReadableStream](readablestream)<T>; }, options?: [PipeOptions](pipeoptions)): [ReadableStream](readablestream)<T>; pipeTo(dest: [WritableStream](writablestream)<R>, options?: [PipeOptions](pipeoptions)): Promise<void>; tee(): [[ReadableStream](readablestream)<R>, [ReadableStream](readablestream)<R>]; } ``` ``` var ReadableStream: {prototype: [ReadableStream](readablestream); new (underlyingSource: [UnderlyingByteSource](underlyingbytesource), strategy?: {highWaterMark?: number; size?: undefined; }): [ReadableStream](readablestream)<Uint8Array>; new <R = any>(underlyingSource?: [UnderlyingSource](underlyingsource)<R>, strategy?: [QueuingStrategy](queuingstrategy)<R>): [ReadableStream](readablestream)<R>; }; ``` Type Parameters --------------- > `R = any` Properties ---------- > `readonly locked: boolean` Methods ------- > `[[Symbol.asyncIterator]](options?: {preventCancel?: boolean; }): AsyncIterableIterator<R>` > `cancel(reason?: any): Promise<void>` > `getReader(options: {mode: "byob"; }): [ReadableStreamBYOBReader](readablestreambyobreader)` > `getReader(options?: {mode?: undefined; }): [ReadableStreamDefaultReader](readablestreamdefaultreader)<R>` > `pipeThrough<T>({ writable, readable }: {writable: [WritableStream](writablestream)<R>; > readable: [ReadableStream](readablestream)<T>; }, options?: [PipeOptions](pipeoptions)): [ReadableStream](readablestream)<T>` > `pipeTo(dest: [WritableStream](writablestream)<R>, options?: [PipeOptions](pipeoptions)): Promise<void>` > `tee(): 
[[ReadableStream](readablestream)<R>, [ReadableStream](readablestream)<R>]` deno clearTimeout clearTimeout ============ Cancels a scheduled action initiated by `setTimeout()`. ``` const id = setTimeout(() => {console.log('hello');}, 500); // ... clearTimeout(id); ``` ``` function clearTimeout(id?: number): void; ``` > `clearTimeout(id?: number): void` ### Parameters > `id?: number optional` ### Return Type > `void` deno GPUSamplerDescriptor GPUSamplerDescriptor ==================== ``` interface GPUSamplerDescriptor extends [GPUObjectDescriptorBase](gpuobjectdescriptorbase) {addressModeU?: [GPUAddressMode](gpuaddressmode); addressModeV?: [GPUAddressMode](gpuaddressmode); addressModeW?: [GPUAddressMode](gpuaddressmode); compare?: [GPUCompareFunction](gpucomparefunction); lodMaxClamp?: number; lodMinClamp?: number; magFilter?: [GPUFilterMode](gpufiltermode); maxAnisotropy?: number; minFilter?: [GPUFilterMode](gpufiltermode); mipmapFilter?: [GPUMipmapFilterMode](gpumipmapfiltermode); } ``` Extends ------- > `[GPUObjectDescriptorBase](gpuobjectdescriptorbase)` Properties ---------- > `addressModeU?: [GPUAddressMode](gpuaddressmode)` > `addressModeV?: [GPUAddressMode](gpuaddressmode)` > `addressModeW?: [GPUAddressMode](gpuaddressmode)` > `compare?: [GPUCompareFunction](gpucomparefunction)` > `lodMaxClamp?: number` > `lodMinClamp?: number` > `magFilter?: [GPUFilterMode](gpufiltermode)` > `maxAnisotropy?: number` > `minFilter?: [GPUFilterMode](gpufiltermode)` > `mipmapFilter?: [GPUMipmapFilterMode](gpumipmapfiltermode)` deno Deno.ftruncateSync Deno.ftruncateSync ================== Synchronously truncates or extends the specified file stream to reach the specified `len`. If `len` is not specified then the entire file contents are truncated, as if `len` were set to `0`. If the file previously was larger than this new length, the extra data is lost. If the file previously was shorter, it is extended, and the extended part reads as null bytes ('\0'). 
``` // truncate the entire file const file = Deno.openSync("my_file.txt", { read: true, write: true, truncate: true, create: true }); Deno.ftruncateSync(file.rid); ``` ``` // truncate part of the file const file = Deno.openSync("my_file.txt", { read: true, write: true, create: true }); Deno.writeSync(file.rid, new TextEncoder().encode("Hello World")); Deno.ftruncateSync(file.rid, 7); Deno.seekSync(file.rid, 0, Deno.SeekMode.Start); const data = new Uint8Array(32); Deno.readSync(file.rid, data); console.log(new TextDecoder().decode(data)); // Hello W ``` ``` function ftruncateSync(rid: number, len?: number): void; ``` > `ftruncateSync(rid: number, len?: number): void` ### Parameters > `rid: number` > `len?: number optional` ### Return Type > `void` deno WebAssembly.Instance WebAssembly.Instance ==================== A `WebAssembly.Instance` object is a stateful, executable instance of a `WebAssembly.Module`. Instance objects contain all the Exported WebAssembly functions that allow calling into WebAssembly code from JavaScript. [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WebAssembly/Instance) ``` class Instance { constructor(module: [Module](webassembly.module), importObject?: [Imports](webassembly.imports)); readonly exports: [Exports](webassembly.exports); } ``` Constructors ------------ > `new Instance(module: [Module](webassembly.module), importObject?: [Imports](webassembly.imports))` Creates a new Instance object. Properties ---------- > `exports: [Exports](webassembly.exports)` Returns an object containing as its members all the functions exported from the WebAssembly module instance, to allow them to be accessed and used by JavaScript. Read-only.
deno WebAssembly.ModuleImportDescriptor WebAssembly.ModuleImportDescriptor ================================== A `ModuleImportDescriptor` is the description of a declared import in a `WebAssembly.Module`. ``` interface ModuleImportDescriptor {kind: [ImportExportKind](webassembly.importexportkind); module: string; name: string; } ``` Properties ---------- > `kind: [ImportExportKind](webassembly.importexportkind)` > `module: string` > `name: string` deno HkdfParams HkdfParams ========== ``` interface HkdfParams extends [Algorithm](algorithm) {hash: [HashAlgorithmIdentifier](hashalgorithmidentifier); info: [BufferSource](buffersource); salt: [BufferSource](buffersource); } ``` Extends ------- > `[Algorithm](algorithm)` Properties ---------- > `hash: [HashAlgorithmIdentifier](hashalgorithmidentifier)` > `info: [BufferSource](buffersource)` > `salt: [BufferSource](buffersource)` deno WritableStreamDefaultController WritableStreamDefaultController =============================== This Streams API interface represents a controller allowing control of a WritableStream's state. When constructing a WritableStream, the underlying sink is given a corresponding WritableStreamDefaultController instance to manipulate. 
``` interface WritableStreamDefaultController {signal: [AbortSignal](abortsignal); error(error?: any): void; } ``` ``` var WritableStreamDefaultController: [WritableStreamDefaultController](writablestreamdefaultcontroller); ``` Properties ---------- > `signal: [AbortSignal](abortsignal)` Methods ------- > `error(error?: any): void` deno GPUMultisampleState GPUMultisampleState =================== ``` interface GPUMultisampleState {alphaToCoverageEnabled?: boolean; count?: number; mask?: number; } ``` Properties ---------- > `alphaToCoverageEnabled?: boolean` > `count?: number` > `mask?: number` deno WorkerOptions WorkerOptions ============= ``` interface WorkerOptions {name?: string; type?: "classic" | "module"; } ``` Properties ---------- > `name?: string` > `type?: "classic" | "module"` deno BodyInit BodyInit ======== ``` type BodyInit = | [Blob](blob) | [BufferSource](buffersource) | [FormData](formdata) | [URLSearchParams](urlsearchparams) | [ReadableStream](readablestream)<Uint8Array> | string; ``` Type ---- > `[Blob](blob) | [BufferSource](buffersource) | [FormData](formdata) | [URLSearchParams](urlsearchparams) | [ReadableStream](readablestream)<Uint8Array> | string` deno GPUSamplerBindingType GPUSamplerBindingType ===================== ``` type GPUSamplerBindingType = "filtering" | "non-filtering" | "comparison"; ``` Type ---- > `"filtering" | "non-filtering" | "comparison"` deno AbortSignal AbortSignal =========== A signal object that allows you to communicate with a DOM request (such as a Fetch) and abort it if required via an AbortController object. 
``` interface AbortSignal extends [EventTarget](eventtarget) { readonly aborted: boolean; onabort: ((this: [AbortSignal](abortsignal), ev: [Event](event)) => any) | null; readonly reason: any; addEventListener<K extends keyof [AbortSignalEventMap](abortsignaleventmap)>( type: K, listener: (this: [AbortSignal](abortsignal), ev: [AbortSignalEventMap](abortsignaleventmap)[K]) => any, options?: boolean | [AddEventListenerOptions](addeventlisteneroptions),): void; addEventListener( type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [AddEventListenerOptions](addeventlisteneroptions),): void; removeEventListener<K extends keyof [AbortSignalEventMap](abortsignaleventmap)>( type: K, listener: (this: [AbortSignal](abortsignal), ev: [AbortSignalEventMap](abortsignaleventmap)[K]) => any, options?: boolean | [EventListenerOptions](eventlisteneroptions),): void; removeEventListener( type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [EventListenerOptions](eventlisteneroptions),): void; throwIfAborted(): void; } ``` ``` var AbortSignal: {prototype: [AbortSignal](abortsignal); new (): [AbortSignal](abortsignal); abort(reason?: any): [AbortSignal](abortsignal); timeout(milliseconds: number): [AbortSignal](abortsignal); }; ``` Extends ------- > `[EventTarget](eventtarget)` Properties ---------- > `readonly aborted: boolean` Returns true if this AbortSignal's AbortController has signaled to abort, and false otherwise. 
> `onabort: ((this: [AbortSignal](abortsignal), ev: [Event](event)) => any) | null` > `readonly reason: any` Methods ------- > `addEventListener<K extends keyof [AbortSignalEventMap](abortsignaleventmap)>( > type: K, > > listener: (this: [AbortSignal](abortsignal), ev: [AbortSignalEventMap](abortsignaleventmap)[K]) => any, > > options?: boolean | [AddEventListenerOptions](addeventlisteneroptions),): void` > `addEventListener( > type: string, > > listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), > > options?: boolean | [AddEventListenerOptions](addeventlisteneroptions),): void` > `removeEventListener<K extends keyof [AbortSignalEventMap](abortsignaleventmap)>( > type: K, > > listener: (this: [AbortSignal](abortsignal), ev: [AbortSignalEventMap](abortsignaleventmap)[K]) => any, > > options?: boolean | [EventListenerOptions](eventlisteneroptions),): void` > `removeEventListener( > type: string, > > listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), > > options?: boolean | [EventListenerOptions](eventlisteneroptions),): void` > `throwIfAborted(): void` Throws this AbortSignal's abort reason, if its AbortController has signaled to abort; otherwise, does nothing. deno btoa btoa ==== Creates a base-64 ASCII encoded string from the input string. ``` console.log(btoa("hello world")); // outputs "aGVsbG8gd29ybGQ=" ``` ``` function btoa(s: string): string; ``` > `btoa(s: string): string` ### Parameters > `s: string` ### Return Type > `string` deno Deno.seek Deno.seek ========= Seek a resource ID (`rid`) to the given `offset` under mode given by `whence`. The call resolves to the new position within the resource (bytes from the start). 
``` // Given file.rid pointing to file with "Hello world", which is 11 bytes long: const file = await Deno.open( "hello.txt", { read: true, write: true, truncate: true, create: true }, ); await Deno.write(file.rid, new TextEncoder().encode("Hello world")); // advance cursor 6 bytes const cursorPosition = await Deno.seek(file.rid, 6, Deno.SeekMode.Start); console.log(cursorPosition); // 6 const buf = new Uint8Array(100); await file.read(buf); console.log(new TextDecoder().decode(buf)); // "world" file.close(); ``` The seek modes work as follows: ``` // Given file.rid pointing to file with "Hello world", which is 11 bytes long: const file = await Deno.open( "hello.txt", { read: true, write: true, truncate: true, create: true }, ); await Deno.write(file.rid, new TextEncoder().encode("Hello world")); // Seek 6 bytes from the start of the file console.log(await Deno.seek(file.rid, 6, Deno.SeekMode.Start)); // "6" // Seek 2 more bytes from the current position console.log(await Deno.seek(file.rid, 2, Deno.SeekMode.Current)); // "8" // Seek backwards 2 bytes from the end of the file console.log(await Deno.seek(file.rid, -2, Deno.SeekMode.End)); // "9" (e.g. 11-2) file.close(); ``` ``` function seek( rid: number, offset: number, whence: [SeekMode](deno.seekmode),): Promise<number>; ``` > `seek(rid: number, offset: number, whence: [SeekMode](deno.seekmode)): Promise<number>` ### Parameters > `rid: number` > `offset: number` > `whence: [SeekMode](deno.seekmode)` ### Return Type > `Promise<number>` deno WebAssembly.validate WebAssembly.validate ==================== The `WebAssembly.validate()` function validates a given typed array of WebAssembly binary code, returning whether the bytes form a valid wasm module (`true`) or not (`false`). 
[MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WebAssembly/validate) ``` function validate(bytes: [BufferSource](buffersource)): boolean; ``` > `validate(bytes: [BufferSource](buffersource)): boolean` ### Parameters > `bytes: [BufferSource](buffersource)` ### Return Type > `boolean` deno Deno.args Deno.args ========= Returns the script arguments to the program. If, for example, we run a program: `deno run --allow-read https://deno.land/std/examples/cat.ts /etc/passwd` then `Deno.args` will contain `[ "/etc/passwd" ]`. ``` const args: string[]; ``` deno PostMessageOptions PostMessageOptions ================== deprecated This type has been renamed to StructuredSerializeOptions. Use that type for new code. @deprecated use `StructuredSerializeOptions` instead. ``` type PostMessageOptions = [StructuredSerializeOptions](structuredserializeoptions); ``` Type ---- > `[StructuredSerializeOptions](structuredserializeoptions)` deno WebAssembly WebAssembly =========== Classes ------- | | | | --- | --- | | [WebAssembly.CompileError](webassembly.compileerror) | The `WebAssembly.CompileError` object indicates an error during WebAssembly decoding or validation. | | [WebAssembly.Global](webassembly.global) | A `WebAssembly.Global` object represents a global variable instance, accessible from both JavaScript and importable/exportable across one or more `WebAssembly.Module` instances. This allows dynamic linking of multiple modules. | | [WebAssembly.Instance](webassembly.instance) | A `WebAssembly.Instance` object is a stateful, executable instance of a `WebAssembly.Module`. Instance objects contain all the Exported WebAssembly functions that allow calling into WebAssembly code from JavaScript. | | [WebAssembly.LinkError](webassembly.linkerror) | The `WebAssembly.LinkError` object indicates an error during module instantiation (besides traps from the start function). 
| | [WebAssembly.Memory](webassembly.memory) | The `WebAssembly.Memory` object is a resizable `ArrayBuffer` or `SharedArrayBuffer` that holds the raw bytes of memory accessed by a WebAssembly Instance. | | [WebAssembly.Module](webassembly.module) | A `WebAssembly.Module` object contains stateless WebAssembly code that has already been compiled by the browser — this can be efficiently shared with Workers, and instantiated multiple times. | | [WebAssembly.RuntimeError](webassembly.runtimeerror) | The `WebAssembly.RuntimeError` object is the error type that is thrown whenever WebAssembly specifies a trap. | | [WebAssembly.Table](webassembly.table) | The `WebAssembly.Table()` object is a JavaScript wrapper object — an array-like structure representing a WebAssembly Table, which stores function references. A table created by JavaScript or in WebAssembly code will be accessible and mutable from both JavaScript and WebAssembly. | Functions --------- | | | | --- | --- | | [WebAssembly.compile](webassembly.compile) | The `WebAssembly.compile()` function compiles WebAssembly binary code into a `WebAssembly.Module` object. This function is useful if it is necessary to compile a module before it can be instantiated (otherwise, the `WebAssembly.instantiate()` function should be used). | | [WebAssembly.compileStreaming](webassembly.compilestreaming) | The `WebAssembly.compileStreaming()` function compiles a `WebAssembly.Module` directly from a streamed underlying source. This function is useful if it is necessary to compile a module before it can be instantiated (otherwise, the `WebAssembly.instantiateStreaming()` function should be used). | | [WebAssembly.instantiate](webassembly.instantiate) | The WebAssembly.instantiate() function allows you to compile and instantiate WebAssembly code. 
| | [WebAssembly.instantiateStreaming](webassembly.instantiatestreaming) | The `WebAssembly.instantiateStreaming()` function compiles and instantiates a WebAssembly module directly from a streamed underlying source. This is the most efficient, optimized way to load wasm code. | | [WebAssembly.validate](webassembly.validate) | The `WebAssembly.validate()` function validates a given typed array of WebAssembly binary code, returning whether the bytes form a valid wasm module (`true`) or not (`false`). | Interfaces ---------- | | | | --- | --- | | [WebAssembly.GlobalDescriptor](webassembly.globaldescriptor) | The `GlobalDescriptor` describes the options you can pass to `new WebAssembly.Global()`. | | [WebAssembly.MemoryDescriptor](webassembly.memorydescriptor) | The `MemoryDescriptor` describes the options you can pass to `new WebAssembly.Memory()`. | | [WebAssembly.ModuleExportDescriptor](webassembly.moduleexportdescriptor) | A `ModuleExportDescriptor` is the description of a declared export in a `WebAssembly.Module`. | | [WebAssembly.ModuleImportDescriptor](webassembly.moduleimportdescriptor) | A `ModuleImportDescriptor` is the description of a declared import in a `WebAssembly.Module`. | | [WebAssembly.TableDescriptor](webassembly.tabledescriptor) | The `TableDescriptor` describes the options you can pass to `new WebAssembly.Table()`. | | [WebAssembly.WebAssemblyInstantiatedSource](webassembly.webassemblyinstantiatedsource) | The value returned from `WebAssembly.instantiate`. 
| Type Aliases ------------ | | | | --- | --- | | [WebAssembly.Exports](webassembly.exports) | | | [WebAssembly.ExportValue](webassembly.exportvalue) | | | [WebAssembly.ImportExportKind](webassembly.importexportkind) | | | [WebAssembly.Imports](webassembly.imports) | | | [WebAssembly.ImportValue](webassembly.importvalue) | | | [WebAssembly.ModuleImports](webassembly.moduleimports) | | | [WebAssembly.TableKind](webassembly.tablekind) | | | [WebAssembly.ValueType](webassembly.valuetype) | | deno Deno.OpenOptions Deno.OpenOptions ================ Options which can be set when doing [`Deno.open`](deno#open) and [`Deno.openSync`](deno#openSync). ``` interface OpenOptions {append?: boolean; create?: boolean; createNew?: boolean; mode?: number; read?: boolean; truncate?: boolean; write?: boolean; } ``` Properties ---------- > `append?: boolean` Defaults to `false`. Sets the option for the append mode. This option, when `true`, means that writes will append to a file instead of overwriting previous contents. Note that setting `{ write: true, append: true }` has the same effect as setting only `{ append: true }`. > `create?: boolean` Defaults to `false`. Sets the option to allow creating a new file, if one doesn't already exist at the specified path. Requires write or append access to be used. > `createNew?: boolean` Defaults to `false`. If set to `true`, no file, directory, or symlink is allowed to exist at the target location. Requires write or append access to be used. When createNew is set to `true`, create and truncate are ignored. > `mode?: number` Permissions to use if creating the file (defaults to `0o666`, before the process's umask). Ignored on Windows. > `read?: boolean` Defaults to `true`. Sets the option for read access. This option, when `true`, means that the file should be read-able if opened. > `truncate?: boolean` Defaults to `false`. Sets the option for truncating a previous file. 
If a file is successfully opened with this option set it will truncate the file to `0` size if it already exists. The file must be opened with write access for truncate to work. > `write?: boolean` Defaults to `false`. Sets the option for write access. This option, when `true`, means that the file should be write-able if opened. If the file already exists, any write calls on it will overwrite its contents, by default without truncating it. deno Deno.errors.NotConnected Deno.errors.NotConnected ======================== Raised when the underlying operating system reports an `ENOTCONN` error. ``` class NotConnected extends Error { } ``` Extends ------- > `Error` deno onunload onunload ======== ``` var onunload: ((this: [Window](window), ev: [Event](event)) => any) | null; ``` deno Transformer Transformer =========== ``` interface Transformer <I = any, O = any> {flush?: [TransformStreamDefaultControllerCallback](transformstreamdefaultcontrollercallback)<O>; readableType?: undefined; start?: [TransformStreamDefaultControllerCallback](transformstreamdefaultcontrollercallback)<O>; transform?: [TransformStreamDefaultControllerTransformCallback](transformstreamdefaultcontrollertransformcallback)<I, O>; writableType?: undefined; } ``` Type Parameters --------------- > `I = any` > `O = any` Properties ---------- > `flush?: [TransformStreamDefaultControllerCallback](transformstreamdefaultcontrollercallback)<O>` > `readableType?: undefined` > `start?: [TransformStreamDefaultControllerCallback](transformstreamdefaultcontrollercallback)<O>` > `transform?: [TransformStreamDefaultControllerTransformCallback](transformstreamdefaultcontrollertransformcallback)<I, O>` > `writableType?: undefined` deno FileReaderEventMap FileReaderEventMap ================== ``` interface FileReaderEventMap {abort: [ProgressEvent](progressevent)<[FileReader](filereader)>; error: [ProgressEvent](progressevent)<[FileReader](filereader)>; load: [ProgressEvent](progressevent)<[FileReader](filereader)>; 
loadend: [ProgressEvent](progressevent)<[FileReader](filereader)>; loadstart: [ProgressEvent](progressevent)<[FileReader](filereader)>; progress: [ProgressEvent](progressevent)<[FileReader](filereader)>; } ``` Properties ---------- > `abort: [ProgressEvent](progressevent)<[FileReader](filereader)>` > `error: [ProgressEvent](progressevent)<[FileReader](filereader)>` > `load: [ProgressEvent](progressevent)<[FileReader](filereader)>` > `loadend: [ProgressEvent](progressevent)<[FileReader](filereader)>` > `loadstart: [ProgressEvent](progressevent)<[FileReader](filereader)>` > `progress: [ProgressEvent](progressevent)<[FileReader](filereader)>` deno AbortController AbortController =============== A controller object that allows you to abort one or more DOM requests as and when desired. ``` class AbortController { readonly signal: [AbortSignal](abortsignal); abort(reason?: any): void; } ``` Properties ---------- > `signal: [AbortSignal](abortsignal)` Returns the AbortSignal object associated with this object. Methods ------- > `abort(reason?: any): void` Invoking this method will set this object's AbortSignal's aborted flag and signal to any observers that the associated activity is to be aborted. deno Deno.SRVRecord Deno.SRVRecord ============== If `resolveDns` is called with "SRV" record type specified, it will return an array of this interface. 
``` interface SRVRecord {port: number; priority: number; target: string; weight: number; } ``` Properties ---------- > `port: number` > `priority: number` > `target: string` > `weight: number` deno Body Body ==== ``` interface Body { readonly body: [ReadableStream](readablestream)<Uint8Array> | null; readonly bodyUsed: boolean; arrayBuffer(): Promise<ArrayBuffer>; blob(): Promise<[Blob](blob)>; formData(): Promise<[FormData](formdata)>; json(): Promise<any>; text(): Promise<string>; } ``` Properties ---------- > `readonly body: [ReadableStream](readablestream)<Uint8Array> | null` A simple getter used to expose a `ReadableStream` of the body contents. > `readonly bodyUsed: boolean` Stores a `Boolean` that declares whether the body has been used in a response yet. Methods ------- > `arrayBuffer(): Promise<ArrayBuffer>` Takes a `Response` stream and reads it to completion. It returns a promise that resolves with an `ArrayBuffer`. > `blob(): Promise<[Blob](blob)>` Takes a `Response` stream and reads it to completion. It returns a promise that resolves with a `Blob`. > `formData(): Promise<[FormData](formdata)>` Takes a `Response` stream and reads it to completion. It returns a promise that resolves with a `FormData` object. > `json(): Promise<any>` Takes a `Response` stream and reads it to completion. It returns a promise that resolves with the result of parsing the body text as JSON. > `text(): Promise<string>` Takes a `Response` stream and reads it to completion. It returns a promise that resolves with a `USVString` (text).
deno Deno.PermissionDescriptor Deno.PermissionDescriptor ========================= Permission descriptors which define a permission and can be queried, requested, or revoked. ``` type PermissionDescriptor = | [RunPermissionDescriptor](deno.runpermissiondescriptor) | [ReadPermissionDescriptor](deno.readpermissiondescriptor) | [WritePermissionDescriptor](deno.writepermissiondescriptor) | [NetPermissionDescriptor](deno.netpermissiondescriptor) | [EnvPermissionDescriptor](deno.envpermissiondescriptor) | [SysPermissionDescriptor](deno.syspermissiondescriptor) | [FfiPermissionDescriptor](deno.ffipermissiondescriptor) | [HrtimePermissionDescriptor](deno.hrtimepermissiondescriptor); ``` Type ---- > `[RunPermissionDescriptor](deno.runpermissiondescriptor) | [ReadPermissionDescriptor](deno.readpermissiondescriptor) | [WritePermissionDescriptor](deno.writepermissiondescriptor) | [NetPermissionDescriptor](deno.netpermissiondescriptor) | [EnvPermissionDescriptor](deno.envpermissiondescriptor) | [SysPermissionDescriptor](deno.syspermissiondescriptor) | [FfiPermissionDescriptor](deno.ffipermissiondescriptor) | [HrtimePermissionDescriptor](deno.hrtimepermissiondescriptor)` deno Deno.makeTempDir Deno.makeTempDir ================ Creates a new temporary directory in the default directory for temporary files, unless `dir` is specified. Other optional options include prefixing and suffixing the directory name with `prefix` and `suffix` respectively. This call resolves to the full path to the newly created directory. Multiple programs calling this function simultaneously will create different directories. It is the caller's responsibility to remove the directory when no longer needed. ``` const tempDirName0 = await Deno.makeTempDir(); // e.g. /tmp/2894ea76 const tempDirName1 = await Deno.makeTempDir({ prefix: 'my_temp' }); // e.g. /tmp/my_temp339c944d ``` Requires `allow-write` permission. 
``` function makeTempDir(options?: [MakeTempOptions](deno.maketempoptions)): Promise<string>; ``` > `makeTempDir(options?: [MakeTempOptions](deno.maketempoptions)): Promise<string>` ### Parameters > `options?: [MakeTempOptions](deno.maketempoptions) optional` ### Return Type > `Promise<string>` deno EventListenerOptions EventListenerOptions ==================== ``` interface EventListenerOptions {capture?: boolean; } ``` Properties ---------- > `capture?: boolean` deno Deno.writeAllSync Deno.writeAllSync ================= deprecated Synchronously write all the content of the array buffer (`arr`) to the writer (`w`). @deprecated Use [`writeAllSync`](https://deno.land/std/streams/conversion.ts?s=writeAllSync) from [`std/streams/conversion.ts`](https://deno.land/std/streams/conversion.ts) instead. `Deno.writeAllSync` will be removed in the future. ``` function writeAllSync(w: [WriterSync](deno.writersync), arr: Uint8Array): void; ``` > `writeAllSync(w: [WriterSync](deno.writersync), arr: Uint8Array): void` ### Parameters > `w: [WriterSync](deno.writersync)` > `arr: Uint8Array` ### Return Type > `void` deno Deno.TlsListener Deno.TlsListener ================ Specialized listener that accepts TLS connections. ``` interface TlsListener extends [Listener](deno.listener), AsyncIterable<[TlsConn](deno.tlsconn)> {[[Symbol.asyncIterator]](): AsyncIterableIterator<[TlsConn](deno.tlsconn)>; accept(): Promise<[TlsConn](deno.tlsconn)>; } ``` Extends ------- > `[Listener](deno.listener)` > `AsyncIterable<[TlsConn](deno.tlsconn)>` Methods ------- > `[[Symbol.asyncIterator]](): AsyncIterableIterator<[TlsConn](deno.tlsconn)>` > `accept(): Promise<[TlsConn](deno.tlsconn)>` Waits for a TLS client to connect and accepts the connection. deno Deno.SetRawOptions Deno.SetRawOptions ================== **UNSTABLE**: new API, yet to be vetted. 
``` interface SetRawOptions {cbreak: boolean; } ``` Properties ---------- > `cbreak: boolean` deno DOMException DOMException ============ ``` class DOMException extends Error { constructor(message?: string, name?: string); readonly code: number; readonly message: string; readonly name: string; } ``` Extends ------- > `Error` Constructors ------------ > `new DOMException(message?: string, name?: string)` Properties ---------- > `code: number` > `message: string` > `name: string` deno Deno.readTextFileSync Deno.readTextFileSync ===================== Synchronously reads and returns the entire contents of a file as a UTF-8 decoded string. Reading a directory throws an error. ``` const data = Deno.readTextFileSync("hello.txt"); console.log(data); ``` Requires `allow-read` permission. ``` function readTextFileSync(path: string | [URL](url)): string; ``` > `readTextFileSync(path: string | [URL](url)): string` ### Parameters > `path: string | [URL](url)` ### Return Type > `string` deno WebAssembly.GlobalDescriptor WebAssembly.GlobalDescriptor ============================ The `GlobalDescriptor` describes the options you can pass to `new WebAssembly.Global()`. ``` interface GlobalDescriptor {mutable?: boolean; value: [ValueType](webassembly.valuetype); } ``` Properties ---------- > `mutable?: boolean` > `value: [ValueType](webassembly.valuetype)` deno Deno.close Deno.close ========== Close the given resource ID (`rid`) which has been previously opened, such as via opening or creating a file. Closing a file when you are finished with it is important to avoid leaking resources. 
``` const file = await Deno.open("my_file.txt"); // do work with "file" object Deno.close(file.rid); ``` ``` function close(rid: number): void; ``` > `close(rid: number): void` ### Parameters > `rid: number` ### Return Type > `void` deno CryptoKeyPair CryptoKeyPair ============= The CryptoKeyPair dictionary of the Web Crypto API represents a key pair for an asymmetric cryptography algorithm, also known as a public-key algorithm. ``` interface CryptoKeyPair {privateKey: [CryptoKey](cryptokey); publicKey: [CryptoKey](cryptokey); } ``` ``` var CryptoKeyPair: {prototype: [CryptoKeyPair](cryptokeypair); new (): [CryptoKeyPair](cryptokeypair); }; ``` Properties ---------- > `privateKey: [CryptoKey](cryptokey)` > `publicKey: [CryptoKey](cryptokey)` deno GPUCullMode GPUCullMode =========== ``` type GPUCullMode = "none" | "front" | "back"; ``` Type ---- > `"none" | "front" | "back"` deno Deno.errors Deno.errors =========== A set of error constructors that are raised by Deno APIs. Can be used to provide more specific handling of failures within code which is using Deno APIs. For example, handling attempting to open a file which does not exist: ``` try { const file = await Deno.open("./some/file.txt"); } catch (error) { if (error instanceof Deno.errors.NotFound) { console.error("the file was not found"); } else { // otherwise re-throw throw error; } } ``` Classes ------- | | | | --- | --- | | [Deno.errors.AddrInUse](deno.errors.addrinuse) | Raised when attempting to open a server listener on an address and port that already has a listener. | | [Deno.errors.AddrNotAvailable](deno.errors.addrnotavailable) | Raised when the underlying operating system reports an `EADDRNOTAVAIL` error. | | [Deno.errors.AlreadyExists](deno.errors.alreadyexists) | Raised when trying to create a resource, like a file, that already exists. | | [Deno.errors.BadResource](deno.errors.badresource) | The underlying IO resource is invalid or closed, and so the operation could not be performed. 
| | [Deno.errors.BrokenPipe](deno.errors.brokenpipe) | Raised when trying to write to a resource and a broken pipe error occurs. This can happen when trying to write directly to `stdout` or `stderr` and the operating system is unable to pipe the output for a reason external to the Deno runtime. | | [Deno.errors.Busy](deno.errors.busy) | Raised when the underlying IO resource is not available because it is being awaited on in another block of code. | | [Deno.errors.ConnectionAborted](deno.errors.connectionaborted) | Raised when the underlying operating system reports an `ECONNABORTED` error. | | [Deno.errors.ConnectionRefused](deno.errors.connectionrefused) | Raised when the underlying operating system reports that a connection to a resource is refused. | | [Deno.errors.ConnectionReset](deno.errors.connectionreset) | Raised when the underlying operating system reports that a connection has been reset. With network servers, it can be a *normal* occurrence where a client will abort a connection instead of properly shutting it down. | | [Deno.errors.Http](deno.errors.http) | Raised when, while attempting to load a dynamic import, too many redirects were encountered. | | [Deno.errors.Interrupted](deno.errors.interrupted) | Raised when the underlying operating system reports an `EINTR` error. In many cases, this underlying IO error will be handled internally within Deno, or result in a [BadResource](deno.errors.badresource) error instead. | | [Deno.errors.InvalidData](deno.errors.invaliddata) | Raised when an operation returns data that is invalid for the operation being performed. | | [Deno.errors.NotConnected](deno.errors.notconnected) | Raised when the underlying operating system reports an `ENOTCONN` error. | | [Deno.errors.NotFound](deno.errors.notfound) | Raised when the underlying operating system indicates that the file was not found. 
| | [Deno.errors.NotSupported](deno.errors.notsupported) | Raised when the underlying Deno API is asked to perform a function that is not currently supported. | | [Deno.errors.PermissionDenied](deno.errors.permissiondenied) | Raised when the underlying operating system indicates that the user the Deno process is running under does not have the appropriate permissions to a file or resource, or the user *did not* provide the required `--allow-*` flag. | | [Deno.errors.TimedOut](deno.errors.timedout) | Raised when the underlying operating system reports that an I/O operation has timed out (`ETIMEDOUT`). | | [Deno.errors.UnexpectedEof](deno.errors.unexpectedeof) | Raised when attempting to read bytes from a resource, but the EOF was unexpectedly encountered. | | [Deno.errors.WriteZero](deno.errors.writezero) | Raised when writing to an IO buffer resulted in zero bytes being written. | deno GPUOrigin3D GPUOrigin3D =========== ``` type GPUOrigin3D = number[] | [GPUOrigin3DDict](gpuorigin3ddict); ``` Type ---- > `number[] | [GPUOrigin3DDict](gpuorigin3ddict)` deno GPUTextureViewDimension GPUTextureViewDimension ======================= ``` type GPUTextureViewDimension = | "1d" | "2d" | "2d-array" | "cube" | "cube-array" | "3d"; ``` Type ---- > `"1d" | "2d" | "2d-array" | "cube" | "cube-array" | "3d"` deno Deno.symlinkSync Deno.symlinkSync ================ Creates `newpath` as a symbolic link to `oldpath`. The `options.type` parameter can be set to `file` or `dir`. This argument is only available on Windows and ignored on other platforms. ``` Deno.symlinkSync("old/name", "new/name"); ``` Requires full `allow-read` and `allow-write` permissions. 
``` function symlinkSync( oldpath: string | [URL](url), newpath: string | [URL](url), options?: [SymlinkOptions](deno.symlinkoptions),): void; ``` > `symlinkSync(oldpath: string | [URL](url), newpath: string | [URL](url), options?: [SymlinkOptions](deno.symlinkoptions)): void` ### Parameters > `oldpath: string | [URL](url)` > `newpath: string | [URL](url)` > `options?: [SymlinkOptions](deno.symlinkoptions) optional` ### Return Type > `void` deno ReadableStreamDefaultController ReadableStreamDefaultController =============================== ``` interface ReadableStreamDefaultController <R = any> { readonly desiredSize: number | null; close(): void; enqueue(chunk: R): void; error(error?: any): void; } ``` ``` var ReadableStreamDefaultController: {prototype: [ReadableStreamDefaultController](readablestreamdefaultcontroller); new (): [ReadableStreamDefaultController](readablestreamdefaultcontroller); }; ``` Type Parameters --------------- > `R = any` Properties ---------- > `readonly desiredSize: number | null` Methods ------- > `close(): void` > `enqueue(chunk: R): void` > `error(error?: any): void` deno caches caches ====== ``` var caches: [CacheStorage](cachestorage); ``` deno URLSearchParams URLSearchParams =============== ``` class URLSearchParams { constructor(init?: | string[][] | Record<string, string> | string | [URLSearchParams](urlsearchparams) ); append(name: string, value: string): void; delete(name: string): void; entries(): IterableIterator<[string, string]>; forEach(callbackfn: ( value: string, key: string, parent: this,) => void, thisArg?: any): void; get(name: string): string | null; getAll(name: string): string[]; has(name: string): boolean; keys(): IterableIterator<string>; set(name: string, value: string): void; sort(): void; toString(): string; values(): IterableIterator<string>; [Symbol.iterator](): IterableIterator<[string, string]>; static toString(): string; } ``` Constructors ------------ > 
`new URLSearchParams(init?: string[][] | Record<string, string> | string | [URLSearchParams](urlsearchparams))` Methods ------- > `append(name: string, value: string): void` Appends a specified key/value pair as a new search parameter. ``` let searchParams = new URLSearchParams(); searchParams.append('name', 'first'); searchParams.append('name', 'second'); ``` > `delete(name: string): void` Deletes the given search parameter and its associated value from the list of all search parameters. ``` let searchParams = new URLSearchParams([['name', 'value']]); searchParams.delete('name'); ``` > `entries(): IterableIterator<[string, string]>` Returns an iterator that allows iteration over all key/value pairs contained in this object. ``` const params = new URLSearchParams([["a", "b"], ["c", "d"]]); for (const [key, value] of params.entries()) { console.log(key, value); } ``` > `forEach(callbackfn: (value: string, key: string, parent: this) => void, thisArg?: any): void` Calls a function for each element contained in this object and returns undefined. Optionally accepts, as a second argument, an object to use as `this` when executing the callback. ``` const params = new URLSearchParams([["a", "b"], ["c", "d"]]); params.forEach((value, key, parent) => { console.log(value, key, parent); }); ``` > `get(name: string): string | null` Returns the first value associated with the given search parameter. ``` searchParams.get('name'); ``` > `getAll(name: string): string[]` Returns all the values associated with a given search parameter as an array. ``` searchParams.getAll('name'); ``` > `has(name: string): boolean` Returns a Boolean that indicates whether a parameter with the specified name exists. ``` searchParams.has('name'); ``` > `keys(): IterableIterator<string>` Returns an iterator that allows iteration over all keys contained in this object. 
``` const params = new URLSearchParams([["a", "b"], ["c", "d"]]); for (const key of params.keys()) { console.log(key); } ``` > `set(name: string, value: string): void` Sets the value associated with a given search parameter to the given value. If there were several matching values, this method deletes the others. If the search parameter doesn't exist, this method creates it. ``` searchParams.set('name', 'value'); ``` > `sort(): void` Sorts all key/value pairs contained in this object in place and returns undefined. The sort order is according to Unicode code points of the keys. ``` searchParams.sort(); ``` > `toString(): string` Returns a query string suitable for use in a URL. ``` searchParams.toString(); ``` > `values(): IterableIterator<string>` Returns an iterator that allows iteration over all values contained in this object. ``` const params = new URLSearchParams([["a", "b"], ["c", "d"]]); for (const value of params.values()) { console.log(value); } ``` > `[Symbol.iterator](): IterableIterator<[string, string]>` Returns an iterator that allows iteration over all key/value pairs contained in this object. ``` const params = new URLSearchParams([["a", "b"], ["c", "d"]]); for (const [key, value] of params) { console.log(key, value); } ``` Static Methods -------------- > `toString(): string` deno Deno.Buffer Deno.Buffer =========== deprecated A variable-sized buffer of bytes with `read()` and `write()` methods. @deprecated Use [`Buffer`](https://deno.land/std/io/buffer.ts?s=Buffer) from [`std/io/buffer.ts`](https://deno.land/std/io/buffer.ts) instead. `Deno.Buffer` will be removed in the future. 
``` class Buffer implements [Reader](deno.reader), [ReaderSync](deno.readersync), [Writer](deno.writer), [WriterSync](deno.writersync) { constructor(ab?: ArrayBuffer); readonly capacity: number; readonly length: number; bytes(options?: {copy?: boolean; }): Uint8Array; empty(): boolean; grow(n: number): void; read(p: Uint8Array): Promise<number | null>; readFrom(r: [Reader](deno.reader)): Promise<number>; readFromSync(r: [ReaderSync](deno.readersync)): number; readSync(p: Uint8Array): number | null; reset(): void; truncate(n: number): void; write(p: Uint8Array): Promise<number>; writeSync(p: Uint8Array): number; } ``` Implements ---------- > `[Reader](deno.reader)` > `[ReaderSync](deno.readersync)` > `[Writer](deno.writer)` > `[WriterSync](deno.writersync)` Constructors ------------ > `new Buffer(ab?: ArrayBuffer)` Properties ---------- > `capacity: number` The read-only capacity of the buffer's underlying byte slice, that is, the total space allocated for the buffer's data. > `length: number` The read-only number of bytes in the unread portion of the buffer. Methods ------- > `bytes(options?: {copy?: boolean; }): Uint8Array` Returns a slice holding the unread portion of the buffer. The slice is valid for use only until the next buffer modification (that is, only until the next call to a method like `read()`, `write()`, `reset()`, or `truncate()`). If `options.copy` is false, the slice aliases the buffer content at least until the next buffer modification, so immediate changes to the slice will affect the result of future reads. @param options Defaults to `{ copy: true }` > `empty(): boolean` Returns whether the unread portion of the buffer is empty. > `grow(n: number): void` Grows the buffer's capacity, if necessary, to guarantee space for another `n` bytes. After `.grow(n)`, at least `n` bytes can be written to the buffer without another allocation. If `n` is negative, `.grow()` will throw. If the buffer can't grow, it will throw an error. 
Based on Go's [Buffer.Grow](https://golang.org/pkg/bytes/#Buffer.Grow). > `read(p: Uint8Array): Promise<number | null>` Reads the next `p.length` bytes from the buffer or until the buffer is drained. Resolves to the number of bytes read. If the buffer has no data to return, resolves to EOF (`null`). NOTE: This method reads bytes synchronously; it's provided for compatibility with the `Reader` interface. > `readFrom(r: [Reader](deno.reader)): Promise<number>` Reads data from `r` until EOF (`null`) and appends it to the buffer, growing the buffer as needed. It resolves to the number of bytes read. If the buffer becomes too large, `.readFrom()` will reject with an error. Based on Go's [Buffer.ReadFrom](https://golang.org/pkg/bytes/#Buffer.ReadFrom). > `readFromSync(r: [ReaderSync](deno.readersync)): number` Reads data from `r` until EOF (`null`) and appends it to the buffer, growing the buffer as needed. It returns the number of bytes read. If the buffer becomes too large, `.readFromSync()` will throw an error. Based on Go's [Buffer.ReadFrom](https://golang.org/pkg/bytes/#Buffer.ReadFrom). > `readSync(p: Uint8Array): number | null` Reads the next `p.length` bytes from the buffer or until the buffer is drained. Returns the number of bytes read. If the buffer has no data to return, the return is EOF (`null`). > `reset(): void` Resets the buffer to be empty, but it retains the underlying storage for use by future writes. `.reset()` is the same as `.truncate(0)`. > `truncate(n: number): void` Discards all but the first `n` unread bytes from the buffer but continues to use the same allocated storage. It throws if `n` is negative or greater than the length of the buffer. > `write(p: Uint8Array): Promise<number>` NOTE: This method writes bytes synchronously; it's provided for compatibility with the `Writer` interface. > `writeSync(p: Uint8Array): number`
deno Deno.linkSync Deno.linkSync ============= Synchronously creates `newpath` as a hard link to `oldpath`. ``` Deno.linkSync("old/name", "new/name"); ``` Requires `allow-read` and `allow-write` permissions. ``` function linkSync(oldpath: string, newpath: string): void; ``` > `linkSync(oldpath: string, newpath: string): void` ### Parameters > `oldpath: string` > `newpath: string` ### Return Type > `void` deno GPUAddressMode GPUAddressMode ============== ``` type GPUAddressMode = "clamp-to-edge" | "repeat" | "mirror-repeat"; ``` Type ---- > `"clamp-to-edge" | "repeat" | "mirror-repeat"` deno Deno.TcpConn Deno.TcpConn ============ ``` interface TcpConn extends [Conn](deno.conn) {setKeepAlive(keepalive?: boolean): void; setNoDelay(nodelay?: boolean): void; } ``` Extends ------- > `[Conn](deno.conn)` Methods ------- > `setKeepAlive(keepalive?: boolean): void` **UNSTABLE**: new API, see <https://github.com/denoland/deno/issues/13617>. Enable/disable keep-alive functionality. > `setNoDelay(nodelay?: boolean): void` **UNSTABLE**: new API, see <https://github.com/denoland/deno/issues/13617>. Enable/disable the use of Nagle's algorithm. Defaults to true. 
deno GPUSupportedFeatures GPUSupportedFeatures ==================== ``` class GPUSupportedFeatures { size: number; entries(): IterableIterator<[[GPUFeatureName](gpufeaturename), [GPUFeatureName](gpufeaturename)]>; forEach(callbackfn: ( value: [GPUFeatureName](gpufeaturename), value2: [GPUFeatureName](gpufeaturename), set: Set<[GPUFeatureName](gpufeaturename)>,) => void, thisArg?: any): void; has(value: [GPUFeatureName](gpufeaturename)): boolean; keys(): IterableIterator<[GPUFeatureName](gpufeaturename)>; values(): IterableIterator<[GPUFeatureName](gpufeaturename)>; [Symbol.iterator](): IterableIterator<[GPUFeatureName](gpufeaturename)>; } ``` Properties ---------- > `size: number` Methods ------- > `entries(): IterableIterator<[[GPUFeatureName](gpufeaturename), [GPUFeatureName](gpufeaturename)]>` > `forEach(callbackfn: (value: [GPUFeatureName](gpufeaturename), value2: [GPUFeatureName](gpufeaturename), set: Set<[GPUFeatureName](gpufeaturename)>) => void, thisArg?: any): void` > `has(value: [GPUFeatureName](gpufeaturename)): boolean` > `keys(): IterableIterator<[GPUFeatureName](gpufeaturename)>` > `values(): IterableIterator<[GPUFeatureName](gpufeaturename)>` > `[Symbol.iterator](): IterableIterator<[GPUFeatureName](gpufeaturename)>` deno Deno.makeTempFile Deno.makeTempFile ================= Creates a new temporary file in the default directory for temporary files, unless `dir` is specified. Other options include prefixing and suffixing the file name with `prefix` and `suffix` respectively. This call resolves to the full path to the newly created file. Multiple programs calling this function simultaneously will create different files. It is the caller's responsibility to remove the file when no longer needed. ``` const tmpFileName0 = await Deno.makeTempFile(); // e.g. /tmp/419e0bf2 const tmpFileName1 = await Deno.makeTempFile({ prefix: 'my_temp' }); // e.g. /tmp/my_temp754d3098 ``` Requires `allow-write` permission. 
``` function makeTempFile(options?: [MakeTempOptions](deno.maketempoptions)): Promise<string>; ``` > `makeTempFile(options?: [MakeTempOptions](deno.maketempoptions)): Promise<string>` ### Parameters > `options?: [MakeTempOptions](deno.maketempoptions) optional` ### Return Type > `Promise<string>` deno Deno.mkdir Deno.mkdir ========== Creates a new directory with the specified path. ``` await Deno.mkdir("new_dir"); await Deno.mkdir("nested/directories", { recursive: true }); await Deno.mkdir("restricted_access_dir", { mode: 0o700 }); ``` Throws an error by default if the directory already exists. Requires `allow-write` permission. ``` function mkdir(path: string | [URL](url), options?: [MkdirOptions](deno.mkdiroptions)): Promise<void>; ``` > `mkdir(path: string | [URL](url), options?: [MkdirOptions](deno.mkdiroptions)): Promise<void>` ### Parameters > `path: string | [URL](url)` > `options?: [MkdirOptions](deno.mkdiroptions) optional` ### Return Type > `Promise<void>` deno Deno.NAPTRRecord Deno.NAPTRRecord ================ If `resolveDns` is called with the "NAPTR" record type specified, it will return an array of this interface. ``` interface NAPTRRecord {flags: string; order: number; preference: number; regexp: string; replacement: string; services: string; } ``` Properties ---------- > `flags: string` > `order: number` > `preference: number` > `regexp: string` > `replacement: string` > `services: string` deno MessagePortEventMap MessagePortEventMap =================== ``` interface MessagePortEventMap {message: [MessageEvent](messageevent); messageerror: [MessageEvent](messageevent); } ``` Properties ---------- > `message: [MessageEvent](messageevent)` > `messageerror: [MessageEvent](messageevent)` deno Deno.ReadFileOptions Deno.ReadFileOptions ==================== Options which can be set when using [`Deno.readFile`](deno#readFile) or [`Deno.readFileSync`](deno#readFileSync). 
``` interface ReadFileOptions {signal?: [AbortSignal](abortsignal); } ``` Properties ---------- > `signal?: [AbortSignal](abortsignal)` An abort signal to allow cancellation of the file read operation. If the signal is aborted, the read operation will be stopped and the returned promise will be rejected with an `AbortError`. deno WritableStreamErrorCallback WritableStreamErrorCallback =========================== ``` interface WritableStreamErrorCallback {(reason: any): void | PromiseLike<void>;} ``` Call Signatures --------------- > `(reason: any): void | PromiseLike<void>` deno RsaPssParams RsaPssParams ============ ``` interface RsaPssParams extends [Algorithm](algorithm) {saltLength: number; } ``` Extends ------- > `[Algorithm](algorithm)` Properties ---------- > `saltLength: number` deno Deno.exit Deno.exit ========= Exit the Deno process with an optional exit code. If no exit code is supplied, Deno will exit with a return code of `0`. In worker contexts this is an alias to `self.close();`. ``` Deno.exit(5); ``` ``` function exit(code?: number): never; ``` > `exit(code?: number): never` ### Parameters > `code?: number optional` ### Return Type > `never` deno Deno.fdatasyncSync Deno.fdatasyncSync ================== Synchronously flushes any pending data operations of the given file stream to disk. 
``` const file = Deno.openSync( "my_file.txt", { read: true, write: true, create: true }, ); Deno.writeSync(file.rid, new TextEncoder().encode("Hello World")); Deno.fdatasyncSync(file.rid); console.log(new TextDecoder().decode(Deno.readFileSync("my_file.txt"))); // Hello World ``` ``` function fdatasyncSync(rid: number): void; ``` > `fdatasyncSync(rid: number): void` ### Parameters > `rid: number` ### Return Type > `void` deno GPUVertexBufferLayout GPUVertexBufferLayout ===================== ``` interface GPUVertexBufferLayout {arrayStride: number; attributes: [GPUVertexAttribute](gpuvertexattribute)[]; stepMode?: [GPUVertexStepMode](gpuvertexstepmode); } ``` Properties ---------- > `arrayStride: number` > `attributes: [GPUVertexAttribute](gpuvertexattribute)[]` > `stepMode?: [GPUVertexStepMode](gpuvertexstepmode)` deno GPUCompareFunction GPUCompareFunction ================== ``` type GPUCompareFunction = | "never" | "less" | "equal" | "less-equal" | "greater" | "not-equal" | "greater-equal" | "always"; ``` Type ---- > `"never" | "less" | "equal" | "less-equal" | "greater" | "not-equal" | "greater-equal" | "always"` deno Deno.RecordType Deno.RecordType =============== The type of the resource record. Only the listed types are supported currently. ``` type RecordType = | "A" | "AAAA" | "ANAME" | "CAA" | "CNAME" | "MX" | "NAPTR" | "NS" | "PTR" | "SOA" | "SRV" | "TXT"; ``` Type ---- > `"A" | "AAAA" | "ANAME" | "CAA" | "CNAME" | "MX" | "NAPTR" | "NS" | "PTR" | "SOA" | "SRV" | "TXT"` deno EcdhKeyDeriveParams EcdhKeyDeriveParams =================== ``` interface EcdhKeyDeriveParams extends [Algorithm](algorithm) {public: [CryptoKey](cryptokey); } ``` Extends ------- > `[Algorithm](algorithm)` Properties ---------- > `public: [CryptoKey](cryptokey)` deno Deno.FsFile Deno.FsFile =========== The Deno abstraction for reading and writing files. 
This is the most straightforward way of handling files within Deno and is recommended over using the discrete functions within the `Deno` namespace. ``` const file = await Deno.open("/foo/bar.txt", { read: true }); const fileInfo = await file.stat(); if (fileInfo.isFile) { const buf = new Uint8Array(100); const numberOfBytesRead = await file.read(buf); // 11 bytes const text = new TextDecoder().decode(buf); // "hello world" } file.close(); ``` ``` class FsFile implements [Reader](deno.reader), [ReaderSync](deno.readersync), [Writer](deno.writer), [WriterSync](deno.writersync), [Seeker](deno.seeker), [SeekerSync](deno.seekersync), [Closer](deno.closer) { constructor(rid: number); readonly readable: [ReadableStream](readablestream)<Uint8Array>; readonly rid: number; readonly writable: [WritableStream](writablestream)<Uint8Array>; close(): void; read(p: Uint8Array): Promise<number | null>; readSync(p: Uint8Array): number | null; seek(offset: number, whence: [SeekMode](deno.seekmode)): Promise<number>; seekSync(offset: number, whence: [SeekMode](deno.seekmode)): number; stat(): Promise<[FileInfo](deno.fileinfo)>; statSync(): [FileInfo](deno.fileinfo); truncate(len?: number): Promise<void>; truncateSync(len?: number): void; write(p: Uint8Array): Promise<number>; writeSync(p: Uint8Array): number; } ``` Implements ---------- > `[Reader](deno.reader)` > `[ReaderSync](deno.readersync)` > `[Writer](deno.writer)` > `[WriterSync](deno.writersync)` > `[Seeker](deno.seeker)` > `[SeekerSync](deno.seekersync)` > `[Closer](deno.closer)` Constructors ------------ > `new FsFile(rid: number)` The constructor which takes a resource ID. Generally `FsFile` should not be constructed directly. Instead use [`Deno.open`](deno#open) or [`Deno.openSync`](deno#openSync) to create a new instance of `FsFile`. Properties ---------- > `readable: [ReadableStream](readablestream)<Uint8Array>` A [`ReadableStream`](readablestream) instance representing the byte contents of the file. 
This makes it easy to interoperate with other web streams based APIs. ``` const file = await Deno.open("my_file.txt", { read: true }); const decoder = new TextDecoder(); for await (const chunk of file.readable) { console.log(decoder.decode(chunk)); } file.close(); ``` > `rid: number` The resource ID associated with the file instance. The resource ID should be considered an opaque reference to the resource. > `writable: [WritableStream](writablestream)<Uint8Array>` A [`WritableStream`](writablestream) instance to write the contents of the file. This makes it easy to interoperate with other web streams based APIs. ``` const items = ["hello", "world"]; const file = await Deno.open("my_file.txt", { write: true }); const encoder = new TextEncoder(); const writer = file.writable.getWriter(); for (const item of items) { await writer.write(encoder.encode(item)); } file.close(); ``` Methods ------- > `close(): void` Close the file. Closing a file when you are finished with it is important to avoid leaking resources. ``` const file = await Deno.open("my_file.txt"); // do work with "file" object file.close(); ``` > `read(p: Uint8Array): Promise<number | null>` Read the file into an array buffer (`p`). Resolves to either the number of bytes read during the operation or EOF (`null`) if there was nothing more to read. It is possible for a read to successfully return with `0` bytes. This does not indicate EOF. **It is not guaranteed that the full buffer will be read in a single call.** ``` // if "/foo/bar.txt" contains the text "hello world": const file = await Deno.open("/foo/bar.txt"); const buf = new Uint8Array(100); const numberOfBytesRead = await file.read(buf); // 11 bytes const text = new TextDecoder().decode(buf); // "hello world" file.close(); ``` > `readSync(p: Uint8Array): number | null` Synchronously read from the file into an array buffer (`p`). Returns either the number of bytes read during the operation or EOF (`null`) if there was nothing more to read. 
It is possible for a read to successfully return with `0` bytes. This does not indicate EOF. **It is not guaranteed that the full buffer will be read in a single call.** ``` // if "/foo/bar.txt" contains the text "hello world": const file = Deno.openSync("/foo/bar.txt"); const buf = new Uint8Array(100); const numberOfBytesRead = file.readSync(buf); // 11 bytes const text = new TextDecoder().decode(buf); // "hello world" file.close(); ``` > `seek(offset: number, whence: [SeekMode](deno.seekmode)): Promise<number>` Seek to the given `offset` under mode given by `whence`. The call resolves to the new position within the resource (bytes from the start). ``` // Given file pointing to file with "Hello world", which is 11 bytes long: const file = await Deno.open( "hello.txt", { read: true, write: true, truncate: true, create: true }, ); await file.write(new TextEncoder().encode("Hello world")); // advance cursor 6 bytes const cursorPosition = await file.seek(6, Deno.SeekMode.Start); console.log(cursorPosition); // 6 const buf = new Uint8Array(100); await file.read(buf); console.log(new TextDecoder().decode(buf)); // "world" file.close(); ``` The seek modes work as follows: ``` // Given file.rid pointing to file with "Hello world", which is 11 bytes long: const file = await Deno.open( "hello.txt", { read: true, write: true, truncate: true, create: true }, ); await file.write(new TextEncoder().encode("Hello world")); // Seek 6 bytes from the start of the file console.log(await file.seek(6, Deno.SeekMode.Start)); // "6" // Seek 2 more bytes from the current position console.log(await file.seek(2, Deno.SeekMode.Current)); // "8" // Seek backwards 2 bytes from the end of the file console.log(await file.seek(-2, Deno.SeekMode.End)); // "9" (e.g. 11-2) ``` > `seekSync(offset: number, whence: [SeekMode](deno.seekmode)): number` Synchronously seek to the given `offset` under mode given by `whence`. The new position within the resource (bytes from the start) is returned. 
``` const file = Deno.openSync( "hello.txt", { read: true, write: true, truncate: true, create: true }, ); file.writeSync(new TextEncoder().encode("Hello world")); // advance cursor 6 bytes const cursorPosition = file.seekSync(6, Deno.SeekMode.Start); console.log(cursorPosition); // 6 const buf = new Uint8Array(100); file.readSync(buf); console.log(new TextDecoder().decode(buf)); // "world" file.close(); ``` The seek modes work as follows: ``` // Given file.rid pointing to file with "Hello world", which is 11 bytes long: const file = Deno.openSync( "hello.txt", { read: true, write: true, truncate: true, create: true }, ); file.writeSync(new TextEncoder().encode("Hello world")); // Seek 6 bytes from the start of the file console.log(file.seekSync(6, Deno.SeekMode.Start)); // "6" // Seek 2 more bytes from the current position console.log(file.seekSync(2, Deno.SeekMode.Current)); // "8" // Seek backwards 2 bytes from the end of the file console.log(file.seekSync(-2, Deno.SeekMode.End)); // "9" (e.g. 11-2) file.close(); ``` > `stat(): Promise<[FileInfo](deno.fileinfo)>` Resolves to a [`Deno.FileInfo`](deno#FileInfo) for the file. ``` import { assert } from "https://deno.land/std/testing/asserts.ts"; const file = await Deno.open("hello.txt"); const fileInfo = await file.stat(); assert(fileInfo.isFile); file.close(); ``` > `statSync(): [FileInfo](deno.fileinfo)` Synchronously returns a [`Deno.FileInfo`](deno#FileInfo) for the file. ``` import { assert } from "https://deno.land/std/testing/asserts.ts"; const file = Deno.openSync("hello.txt") const fileInfo = file.statSync(); assert(fileInfo.isFile); file.close(); ``` > `truncate(len?: number): Promise<void>` Truncates (or extends) the file to reach the specified `len`. If `len` is not specified, then the entire file contents are truncated. 
### Truncate the entire file ``` const file = await Deno.open("my_file.txt", { write: true }); await file.truncate(); file.close(); ``` ### Truncate part of the file ``` // if "my_file.txt" contains the text "hello world": const file = await Deno.open("my_file.txt", { write: true }); await file.truncate(7); const buf = new Uint8Array(100); await file.read(buf); const text = new TextDecoder().decode(buf); // "hello w" file.close(); ``` > `truncateSync(len?: number): void` Synchronously truncates (or extends) the file to reach the specified `len`. If `len` is not specified, then the entire file contents are truncated. ### Truncate the entire file ``` const file = Deno.openSync("my_file.txt", { write: true }); file.truncateSync(); file.close(); ``` ### Truncate part of the file ``` // if "my_file.txt" contains the text "hello world": const file = Deno.openSync("my_file.txt", { write: true }); file.truncateSync(7); const buf = new Uint8Array(100); file.readSync(buf); const text = new TextDecoder().decode(buf); // "hello w" file.close(); ``` > `write(p: Uint8Array): Promise<number>` Write the contents of the array buffer (`p`) to the file. Resolves to the number of bytes written. **It is not guaranteed that the full buffer will be written in a single call.** ``` const encoder = new TextEncoder(); const data = encoder.encode("Hello world"); const file = await Deno.open("/foo/bar.txt", { write: true }); const bytesWritten = await file.write(data); // 11 file.close(); ``` > `writeSync(p: Uint8Array): number` Synchronously write the contents of the array buffer (`p`) to the file. Returns the number of bytes written. 
**It is not guaranteed that the full buffer will be written in a single call.** ``` const encoder = new TextEncoder(); const data = encoder.encode("Hello world"); const file = Deno.openSync("/foo/bar.txt", { write: true }); const bytesWritten = file.writeSync(data); // 11 file.close(); ``` deno Cache Cache ===== ``` interface Cache {delete(request: [RequestInfo](requestinfo) | [URL](url), options?: [CacheQueryOptions](cachequeryoptions)): Promise<boolean>; match(request: [RequestInfo](requestinfo) | [URL](url), options?: [CacheQueryOptions](cachequeryoptions)): Promise<[Response](response) | undefined>; put(request: [RequestInfo](requestinfo) | [URL](url), response: [Response](response)): Promise<void>; } ``` ``` var Cache: {prototype: [Cache](cache); new (name: string): [Cache](cache); }; ``` Methods ------- > `delete(request: [RequestInfo](requestinfo) | [URL](url), options?: [CacheQueryOptions](cachequeryoptions)): Promise<boolean>` Delete the cache object matching the provided request. How is the API different from browsers? 1. You cannot delete cache objects using relative paths. 2. You cannot pass options like `ignoreVary`, `ignoreMethod`, `ignoreSearch`. > `match(request: [RequestInfo](requestinfo) | [URL](url), options?: [CacheQueryOptions](cachequeryoptions)): Promise<[Response](response) | undefined>` Return the cache object matching the provided request. How is the API different from browsers? 1. You cannot match cache objects using relative paths. 2. You cannot pass options like `ignoreVary`, `ignoreMethod`, `ignoreSearch`. > `put(request: [RequestInfo](requestinfo) | [URL](url), response: [Response](response)): Promise<void>` Put the provided request/response into the cache. How is the API different from browsers? 1. You cannot match cache objects using relative paths. 2. You cannot pass options like `ignoreVary`, `ignoreMethod`, `ignoreSearch`.
deno GPUIndexFormat GPUIndexFormat ============== ``` type GPUIndexFormat = "uint16" | "uint32"; ``` Type ---- > `"uint16" | "uint32"` deno GPUComputePassEncoder GPUComputePassEncoder ===================== ``` class GPUComputePassEncoder implements [GPUObjectBase](gpuobjectbase), [GPUProgrammablePassEncoder](gpuprogrammablepassencoder) { label: string; beginPipelineStatisticsQuery(querySet: [GPUQuerySet](gpuqueryset), queryIndex: number): undefined; dispatchWorkgroups( x: number, y?: number, z?: number,): undefined; dispatchWorkgroupsIndirect(indirectBuffer: [GPUBuffer](gpubuffer), indirectOffset: number): undefined; end(): undefined; endPipelineStatisticsQuery(): undefined; insertDebugMarker(markerLabel: string): undefined; popDebugGroup(): undefined; pushDebugGroup(groupLabel: string): undefined; setBindGroup( index: number, bindGroup: [GPUBindGroup](gpubindgroup), dynamicOffsets?: number[],): undefined; setBindGroup( index: number, bindGroup: [GPUBindGroup](gpubindgroup), dynamicOffsetsData: Uint32Array, dynamicOffsetsDataStart: number, dynamicOffsetsDataLength: number,): undefined; setPipeline(pipeline: [GPUComputePipeline](gpucomputepipeline)): undefined; writeTimestamp(querySet: [GPUQuerySet](gpuqueryset), queryIndex: number): undefined; } ``` Implements ---------- > `[GPUObjectBase](gpuobjectbase)` > `[GPUProgrammablePassEncoder](gpuprogrammablepassencoder)` Properties ---------- > `label: string` Methods ------- > `beginPipelineStatisticsQuery(querySet: [GPUQuerySet](gpuqueryset), queryIndex: number): undefined` > `dispatchWorkgroups(x: number, y?: number, z?: number): undefined` > `dispatchWorkgroupsIndirect(indirectBuffer: [GPUBuffer](gpubuffer), indirectOffset: number): undefined` > `end(): undefined` > `endPipelineStatisticsQuery(): undefined` > `insertDebugMarker(markerLabel: string): undefined` > `popDebugGroup(): undefined` > `pushDebugGroup(groupLabel: string): undefined` > `setBindGroup(index: number, bindGroup: [GPUBindGroup](gpubindgroup), 
dynamicOffsets?: number[]): undefined` > `setBindGroup(index: number, bindGroup: [GPUBindGroup](gpubindgroup), dynamicOffsetsData: Uint32Array, dynamicOffsetsDataStart: number, dynamicOffsetsDataLength: number): undefined` > `setPipeline(pipeline: [GPUComputePipeline](gpucomputepipeline)): undefined` > `writeTimestamp(querySet: [GPUQuerySet](gpuqueryset), queryIndex: number): undefined` deno GPUTexture GPUTexture ========== ``` class GPUTexture implements [GPUObjectBase](gpuobjectbase) { label: string; createView(descriptor?: [GPUTextureViewDescriptor](gputextureviewdescriptor)): [GPUTextureView](gputextureview); destroy(): undefined; } ``` Implements ---------- > `[GPUObjectBase](gpuobjectbase)` Properties ---------- > `label: string` Methods ------- > `createView(descriptor?: [GPUTextureViewDescriptor](gputextureviewdescriptor)): [GPUTextureView](gputextureview)` > `destroy(): undefined` deno Deno.PermissionState Deno.PermissionState ==================== The current status of the permission: * `"granted"` - the permission has been granted. * `"denied"` - the permission has been explicitly denied. * `"prompt"` - the permission has not explicitly granted nor denied. ``` type PermissionState = "granted" | "denied" | "prompt"; ``` Type ---- > `"granted" | "denied" | "prompt"` deno Deno.errors.ConnectionReset Deno.errors.ConnectionReset =========================== Raised when the underlying operating system reports that a connection has been reset. With network servers, it can be a *normal* occurrence where a client will abort a connection instead of properly shutting it down. 
``` class ConnectionReset extends Error { } ``` Extends ------- > `Error` deno TextDecoder TextDecoder =========== ``` interface TextDecoder { readonly encoding: string; readonly fatal: boolean; readonly ignoreBOM: boolean; decode(input?: [BufferSource](buffersource), options?: [TextDecodeOptions](textdecodeoptions)): string; } ``` ``` var TextDecoder: {prototype: [TextDecoder](textdecoder); new (label?: string, options?: [TextDecoderOptions](textdecoderoptions)): [TextDecoder](textdecoder); }; ``` Properties ---------- > `readonly encoding: string` Returns encoding's name, lowercased. > `readonly fatal: boolean` Returns `true` if error mode is "fatal", and `false` otherwise. > `readonly ignoreBOM: boolean` Returns `true` if ignore BOM flag is set, and `false` otherwise. Methods ------- > `decode(input?: [BufferSource](buffersource), options?: [TextDecodeOptions](textdecodeoptions)): string` Returns the result of running encoding's decoder. deno UnderlyingSource UnderlyingSource ================ ``` interface UnderlyingSource <R = any> {cancel?: [ReadableStreamErrorCallback](readablestreamerrorcallback); pull?: [ReadableStreamDefaultControllerCallback](readablestreamdefaultcontrollercallback)<R>; start?: [ReadableStreamDefaultControllerCallback](readablestreamdefaultcontrollercallback)<R>; type?: undefined; } ``` Type Parameters --------------- > `R = any` Properties ---------- > `cancel?: [ReadableStreamErrorCallback](readablestreamerrorcallback)` > `pull?: [ReadableStreamDefaultControllerCallback](readablestreamdefaultcontrollercallback)<R>` > `start?: [ReadableStreamDefaultControllerCallback](readablestreamdefaultcontrollercallback)<R>` > `type?: undefined` deno GPURenderBundle GPURenderBundle =============== ``` class GPURenderBundle implements [GPUObjectBase](gpuobjectbase) { label: string; } ``` Implements ---------- > `[GPUObjectBase](gpuobjectbase)` Properties ---------- > `label: string` deno AddEventListenerOptions AddEventListenerOptions 
======================= ``` interface AddEventListenerOptions extends [EventListenerOptions](eventlisteneroptions) {once?: boolean; passive?: boolean; signal?: [AbortSignal](abortsignal); } ``` Extends ------- > `[EventListenerOptions](eventlisteneroptions)` Properties ---------- > `once?: boolean` > `passive?: boolean` > `signal?: [AbortSignal](abortsignal)` deno Deno.errors.UnexpectedEof Deno.errors.UnexpectedEof ========================= Raised when attempting to read bytes from a resource, but the EOF was unexpectedly encountered. ``` class UnexpectedEof extends Error { } ``` Extends ------- > `Error` deno Deno.open Deno.open ========= Open a file and resolve to an instance of [`Deno.FsFile`](deno#FsFile). The file does not need to previously exist if using the `create` or `createNew` open options. It is the caller's responsibility to close the file when finished with it. ``` const file = await Deno.open("/foo/bar.txt", { read: true, write: true }); // Do work with file Deno.close(file.rid); ``` Requires `allow-read` and/or `allow-write` permissions depending on options. 
``` function open(path: string | [URL](url), options?: [OpenOptions](deno.openoptions)): Promise<[FsFile](deno.fsfile)>; ``` > `open(path: string | [URL](url), options?: [OpenOptions](deno.openoptions)): Promise<[FsFile](deno.fsfile)>` ### Parameters > `path: string | [URL](url)` > `options?: [OpenOptions](deno.openoptions) optional` ### Return Type > `Promise<[FsFile](deno.fsfile)>` deno GPUDepthStencilState GPUDepthStencilState ==================== ``` interface GPUDepthStencilState {depthBias?: number; depthBiasClamp?: number; depthBiasSlopeScale?: number; depthCompare?: [GPUCompareFunction](gpucomparefunction); depthWriteEnabled?: boolean; format: [GPUTextureFormat](gputextureformat); stencilBack?: [GPUStencilFaceState](gpustencilfacestate); stencilFront?: [GPUStencilFaceState](gpustencilfacestate); stencilReadMask?: number; stencilWriteMask?: number; } ``` Properties ---------- > `depthBias?: number` > `depthBiasClamp?: number` > `depthBiasSlopeScale?: number` > `depthCompare?: [GPUCompareFunction](gpucomparefunction)` > `depthWriteEnabled?: boolean` > `format: [GPUTextureFormat](gputextureformat)` > `stencilBack?: [GPUStencilFaceState](gpustencilfacestate)` > `stencilFront?: [GPUStencilFaceState](gpustencilfacestate)` > `stencilReadMask?: number` > `stencilWriteMask?: number` deno Deno.CAARecord Deno.CAARecord ============== If `resolveDns` is called with "CAA" record type specified, it will return an array of this interface. 
``` interface CAARecord {critical: boolean; tag: string; value: string; } ``` Properties ---------- > `critical: boolean` > `tag: string` > `value: string` deno GPUBindGroupDescriptor GPUBindGroupDescriptor ====================== ``` interface GPUBindGroupDescriptor extends [GPUObjectDescriptorBase](gpuobjectdescriptorbase) {entries: [GPUBindGroupEntry](gpubindgroupentry)[]; layout: [GPUBindGroupLayout](gpubindgrouplayout); } ``` Extends ------- > `[GPUObjectDescriptorBase](gpuobjectdescriptorbase)` Properties ---------- > `entries: [GPUBindGroupEntry](gpubindgroupentry)[]` > `layout: [GPUBindGroupLayout](gpubindgrouplayout)` deno Deno.copy Deno.copy ========= deprecated Copies from `src` to `dst` until either EOF (`null`) is read from `src` or an error occurs. It resolves to the number of bytes copied or rejects with the first error encountered while copying. @deprecated Use [`copy`](https://deno.land/std/streams/conversion.ts?s=copy) from [`std/streams/conversion.ts`](https://deno.land/std/streams/conversion.ts) instead. `Deno.copy` will be removed in the future. ``` function copy( src: [Reader](deno.reader), dst: [Writer](deno.writer), options?: {bufSize?: number; },): Promise<number>; ``` > `copy(src: [Reader](deno.reader), dst: [Writer](deno.writer), options?: {bufSize?: number; }): Promise<number>` ### Parameters > `src: [Reader](deno.reader)` The source to copy from > `dst: [Writer](deno.writer)` The destination to copy to > `options?: {bufSize?: number; } optional` Can be used to tune size of the buffer. Default size is 32kB ### Return Type > `Promise<number>` deno BinaryType BinaryType ========== ``` type BinaryType = "arraybuffer" | "blob"; ``` Type ---- > `"arraybuffer" | "blob"` deno GPUMapModeFlags GPUMapModeFlags =============== ``` type GPUMapModeFlags = number; ``` Type ---- > `number` deno URL URL === The URL interface represents an object providing static methods used for creating object URLs. 
``` class URL { constructor(url: string | [URL](url), base?: string | [URL](url)); hash: string; host: string; hostname: string; href: string; readonly origin: string; password: string; pathname: string; port: string; protocol: string; search: string; readonly searchParams: [URLSearchParams](urlsearchparams); username: string; toJSON(): string; toString(): string; static createObjectURL(blob: [Blob](blob)): string; static revokeObjectURL(url: string): void; } ``` Constructors ------------ > `new URL(url: string | [URL](url), base?: string | [URL](url))` Properties ---------- > `hash: string` > `host: string` > `hostname: string` > `href: string` > `origin: string` > `password: string` > `pathname: string` > `port: string` > `protocol: string` > `search: string` > `searchParams: [URLSearchParams](urlsearchparams)` > `username: string` Methods ------- > `toJSON(): string` > `toString(): string` Static Methods -------------- > `createObjectURL(blob: [Blob](blob)): string` > `revokeObjectURL(url: string): void` deno ReadableStreamReadValueResult ReadableStreamReadValueResult ============================= ``` interface ReadableStreamReadValueResult <T> {done: false; value: T; } ``` Type Parameters --------------- > `T` Properties ---------- > `done: false` > `value: T` deno PipeOptions PipeOptions =========== ``` interface PipeOptions {preventAbort?: boolean; preventCancel?: boolean; preventClose?: boolean; signal?: [AbortSignal](abortsignal); } ``` Properties ---------- > `preventAbort?: boolean` > `preventCancel?: boolean` > `preventClose?: boolean` > `signal?: [AbortSignal](abortsignal)` deno Deno.fstat Deno.fstat ========== Returns a `Deno.FileInfo` for the given file stream. 
``` import { assert } from "https://deno.land/std/testing/asserts.ts"; const file = await Deno.open("file.txt", { read: true }); const fileInfo = await Deno.fstat(file.rid); assert(fileInfo.isFile); ``` ``` function fstat(rid: number): Promise<[FileInfo](deno.fileinfo)>; ``` > `fstat(rid: number): Promise<[FileInfo](deno.fileinfo)>` ### Parameters > `rid: number` ### Return Type > `Promise<[FileInfo](deno.fileinfo)>` deno GPUError GPUError ======== ``` class GPUError { readonly message: string; } ``` Properties ---------- > `message: string` deno Deno.kill Deno.kill ========= Send a signal to the process under the given `pid`. If `pid` is negative, the signal will be sent to the process group identified by `pid`. An error will be thrown if a negative `pid` is used on Windows. ``` const p = Deno.run({ cmd: ["sleep", "10000"] }); Deno.kill(p.pid, "SIGINT"); ``` Requires `allow-run` permission. ``` function kill(pid: number, signo: [Signal](deno.signal)): void; ``` > `kill(pid: number, signo: [Signal](deno.signal)): void` ### Parameters > `pid: number` > `signo: [Signal](deno.signal)` ### Return Type > `void` deno StructuredSerializeOptions StructuredSerializeOptions ========================== ``` interface StructuredSerializeOptions {transfer?: [Transferable](transferable)[]; } ``` Properties ---------- > `transfer?: [Transferable](transferable)[]` deno GPUTextureUsageFlags GPUTextureUsageFlags ==================== ``` type GPUTextureUsageFlags = number; ``` Type ---- > `number` deno Performance Performance =========== Deno supports [User Timing Level 3](https://w3c.github.io/user-timing) which is not widely supported yet in other runtimes. Check out the [Performance API](https://developer.mozilla.org/en-US/docs/Web/API/Performance) documentation on MDN for further information about how to use the API. 
``` interface Performance {mark(markName: string, options?: [PerformanceMarkOptions](performancemarkoptions)): [PerformanceMark](performancemark); measure(measureName: string, options?: [PerformanceMeasureOptions](performancemeasureoptions)): [PerformanceMeasure](performancemeasure); } ``` ``` class Performance extends EventTarget { constructor(); readonly timeOrigin: number; clearMarks(markName?: string): void; clearMeasures(measureName?: string): void; getEntries(): [PerformanceEntryList](performanceentrylist); getEntriesByName(name: string, type?: string): [PerformanceEntryList](performanceentrylist); getEntriesByType(type: string): [PerformanceEntryList](performanceentrylist); mark(markName: string, options?: [PerformanceMarkOptions](performancemarkoptions)): [PerformanceMark](performancemark); measure(measureName: string, options?: [PerformanceMeasureOptions](performancemeasureoptions)): [PerformanceMeasure](performancemeasure); measure( measureName: string, startMark?: string, endMark?: string,): [PerformanceMeasure](performancemeasure); now(): number; toJSON(): any; } ``` Methods ------- > `mark(markName: string, options?: [PerformanceMarkOptions](performancemarkoptions)): [PerformanceMark](performancemark)` Stores a timestamp with the associated name (a "mark"). > `measure(measureName: string, options?: [PerformanceMeasureOptions](performancemeasureoptions)): [PerformanceMeasure](performancemeasure)` Stores the `DOMHighResTimeStamp` duration between two marks along with the associated name (a "measure"). Extends ------- > `EventTarget` Constructors ------------ > `new Performance()` Properties ---------- > `timeOrigin: number` Returns a timestamp representing the start of the performance measurement. Methods ------- > `clearMarks(markName?: string): void` Removes the stored timestamp with the associated name. > `clearMeasures(measureName?: string): void` Removes stored timestamp with the associated name. 
> `getEntries(): [PerformanceEntryList](performanceentrylist)` > `getEntriesByName(name: string, type?: string): [PerformanceEntryList](performanceentrylist)` > `getEntriesByType(type: string): [PerformanceEntryList](performanceentrylist)` > `mark(markName: string, options?: [PerformanceMarkOptions](performancemarkoptions)): [PerformanceMark](performancemark)` Stores a timestamp with the associated name (a "mark"). > `measure(measureName: string, options?: [PerformanceMeasureOptions](performancemeasureoptions)): [PerformanceMeasure](performancemeasure)` Stores the `DOMHighResTimeStamp` duration between two marks along with the associated name (a "measure"). > `measure(measureName: string, startMark?: string, endMark?: string): [PerformanceMeasure](performancemeasure)` Stores the `DOMHighResTimeStamp` duration between two marks along with the associated name (a "measure"). > `now(): number` Returns the current time from Deno's start in milliseconds. Use the permission flag `--allow-hrtime` to return a precise value. ``` const t = performance.now(); console.log(`${t} ms since start!`); ``` > `toJSON(): any` Returns a JSON representation of the performance object. deno Headers Headers ======= This Fetch API interface allows you to perform various actions on HTTP request and response headers. These actions include retrieving, setting, adding to, and removing. A Headers object has an associated header list, which is initially empty and consists of zero or more name and value pairs. You can add to this using methods like append() (see Examples). In all methods of this interface, header names are matched by case-insensitive byte sequence. 
``` interface Headers {append(name: string, value: string): void; delete(name: string): void; forEach(callbackfn: ( value: string, key: string, parent: [Headers](headers),) => void, thisArg?: any): void; get(name: string): string | null; has(name: string): boolean; set(name: string, value: string): void; } ``` ``` class Headers implements [DomIterable](domiterable)<string, string> { constructor(init?: [HeadersInit](headersinit)); append(name: string, value: string): void; delete(name: string): void; entries(): IterableIterator<[string, string]>; forEach(callbackfn: ( value: string, key: string, parent: this,) => void, thisArg?: any): void; get(name: string): string | null; has(name: string): boolean; keys(): IterableIterator<string>; set(name: string, value: string): void; values(): IterableIterator<string>; [Symbol.iterator](): IterableIterator<[string, string]>; } ``` Methods ------- > `append(name: string, value: string): void` > `delete(name: string): void` > `forEach(callbackfn: ( > value: string, > > key: string, > > parent: [Headers](headers),) => void, thisArg?: any): void` > `get(name: string): string | null` > `has(name: string): boolean` > `set(name: string, value: string): void` Implements ---------- > `[DomIterable](domiterable)<string, string>` Constructors ------------ > `new Headers(init?: [HeadersInit](headersinit))` Methods ------- > `append(name: string, value: string): void` Appends a new value onto an existing header inside a `Headers` object, or adds the header if it does not already exist. > `delete(name: string): void` Deletes a header from a `Headers` object. > `entries(): IterableIterator<[string, string]>` Returns an iterator allowing to go through all key/value pairs contained in this Headers object. Both the key and value of each pair are ByteString objects. 
> `forEach(callbackfn: (value: string, key: string, parent: this) => void, thisArg?: any): void` > `get(name: string): string | null` Returns a `ByteString` sequence of all the values of a header within a `Headers` object with a given name. > `has(name: string): boolean` Returns a boolean stating whether a `Headers` object contains a certain header. > `keys(): IterableIterator<string>` Returns an iterator allowing to go through all keys contained in this Headers object. The keys are ByteString objects. > `set(name: string, value: string): void` Sets a new value for an existing header inside a Headers object, or adds the header if it does not already exist. > `values(): IterableIterator<string>` Returns an iterator allowing to go through all values contained in this Headers object. The values are ByteString objects. > `[Symbol.iterator](): IterableIterator<[string, string]>` The Symbol.iterator well-known symbol specifies the default iterator for this Headers object.
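A quick illustration of the case-insensitive name matching and the `append()` behavior described above (the header names and values are arbitrary examples):

```typescript
// Header names are matched case-insensitively, and append() accumulates
// multiple values for the same header, joined with ", " on retrieval.
const headers = new Headers({ "Content-Type": "application/json" });

headers.append("Accept", "text/html");
headers.append("accept", "application/xhtml+xml");

console.log(headers.get("content-type")); // "application/json"
console.log(headers.get("ACCEPT")); // "text/html, application/xhtml+xml"
console.log(headers.has("x-missing")); // false
```

In contrast to `append()`, `set()` replaces any existing values for the header rather than adding to them.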
deno BlobPropertyBag BlobPropertyBag =============== ``` interface BlobPropertyBag {endings?: "transparent" | "native"; type?: string; } ``` Properties ---------- > `endings?: "transparent" | "native"` > `type?: string` deno GPUBlendFactor GPUBlendFactor ============== ``` type GPUBlendFactor = | "zero" | "one" | "src" | "one-minus-src" | "src-alpha" | "one-minus-src-alpha" | "dst" | "one-minus-dst" | "dst-alpha" | "one-minus-dst-alpha" | "src-alpha-saturated" | "constant" | "one-minus-constant"; ``` Type ---- > `"zero" | "one" | "src" | "one-minus-src" | "src-alpha" | "one-minus-src-alpha" | "dst" | "one-minus-dst" | "dst-alpha" | "one-minus-dst-alpha" | "src-alpha-saturated" | "constant" | "one-minus-constant"` deno Deno.truncate Deno.truncate ============= Truncates (or extends) the specified file to reach the specified `len`. If `len` is not specified, then the entire file contents are truncated. ### Truncate the entire file ``` await Deno.truncate("my_file.txt"); ``` ### Truncate part of the file ``` const file = await Deno.makeTempFile(); await Deno.writeFile(file, new TextEncoder().encode("Hello World")); await Deno.truncate(file, 7); const data = await Deno.readFile(file); console.log(new TextDecoder().decode(data)); // "Hello W" ``` Requires `allow-write` permission. ``` function truncate(name: string, len?: number): Promise<void>; ``` > `truncate(name: string, len?: number): Promise<void>` ### Parameters > `name: string` > `len?: number optional` ### Return Type > `Promise<void>` deno GPUDeviceLostReason GPUDeviceLostReason =================== ``` type GPUDeviceLostReason = "destroyed"; ``` Type ---- > `"destroyed"` deno Deno.Signal Deno.Signal =========== Operating system signals which can be listened for or sent to sub-processes. Which signals are available, and what their standard behaviors are, is OS dependent. 
``` type Signal = | "SIGABRT" | "SIGALRM" | "SIGBREAK" | "SIGBUS" | "SIGCHLD" | "SIGCONT" | "SIGEMT" | "SIGFPE" | "SIGHUP" | "SIGILL" | "SIGINFO" | "SIGINT" | "SIGIO" | "SIGKILL" | "SIGPIPE" | "SIGPROF" | "SIGPWR" | "SIGQUIT" | "SIGSEGV" | "SIGSTKFLT" | "SIGSTOP" | "SIGSYS" | "SIGTERM" | "SIGTRAP" | "SIGTSTP" | "SIGTTIN" | "SIGTTOU" | "SIGURG" | "SIGUSR1" | "SIGUSR2" | "SIGVTALRM" | "SIGWINCH" | "SIGXCPU" | "SIGXFSZ"; ``` Type ---- > `"SIGABRT" | "SIGALRM" | "SIGBREAK" | "SIGBUS" | "SIGCHLD" | "SIGCONT" | "SIGEMT" | "SIGFPE" | "SIGHUP" | "SIGILL" | "SIGINFO" | "SIGINT" | "SIGIO" | "SIGKILL" | "SIGPIPE" | "SIGPROF" | "SIGPWR" | "SIGQUIT" | "SIGSEGV" | "SIGSTKFLT" | "SIGSTOP" | "SIGSYS" | "SIGTERM" | "SIGTRAP" | "SIGTSTP" | "SIGTTIN" | "SIGTTOU" | "SIGURG" | "SIGUSR1" | "SIGUSR2" | "SIGVTALRM" | "SIGWINCH" | "SIGXCPU" | "SIGXFSZ"` deno GPURenderPassEncoder GPURenderPassEncoder ==================== ``` class GPURenderPassEncoder implements [GPUObjectBase](gpuobjectbase), [GPUProgrammablePassEncoder](gpuprogrammablepassencoder), [GPURenderEncoderBase](gpurenderencoderbase) { label: string; beginOcclusionQuery(queryIndex: number): undefined; beginPipelineStatisticsQuery(querySet: [GPUQuerySet](gpuqueryset), queryIndex: number): undefined; draw( vertexCount: number, instanceCount?: number, firstVertex?: number, firstInstance?: number,): undefined; drawIndexed( indexCount: number, instanceCount?: number, firstIndex?: number, baseVertex?: number, firstInstance?: number,): undefined; drawIndexedIndirect(indirectBuffer: [GPUBuffer](gpubuffer), indirectOffset: number): undefined; drawIndirect(indirectBuffer: [GPUBuffer](gpubuffer), indirectOffset: number): undefined; end(): undefined; endOcclusionQuery(): undefined; endPipelineStatisticsQuery(): undefined; executeBundles(bundles: [GPURenderBundle](gpurenderbundle)[]): undefined; insertDebugMarker(markerLabel: string): undefined; popDebugGroup(): undefined; pushDebugGroup(groupLabel: string): undefined; setBindGroup( index: 
number, bindGroup: [GPUBindGroup](gpubindgroup), dynamicOffsets?: number[],): undefined; setBindGroup( index: number, bindGroup: [GPUBindGroup](gpubindgroup), dynamicOffsetsData: Uint32Array, dynamicOffsetsDataStart: number, dynamicOffsetsDataLength: number,): undefined; setBlendConstant(color: [GPUColor](gpucolor)): undefined; setIndexBuffer( buffer: [GPUBuffer](gpubuffer), indexFormat: [GPUIndexFormat](gpuindexformat), offset?: number, size?: number,): undefined; setPipeline(pipeline: [GPURenderPipeline](gpurenderpipeline)): undefined; setScissorRect( x: number, y: number, width: number, height: number,): undefined; setStencilReference(reference: number): undefined; setVertexBuffer( slot: number, buffer: [GPUBuffer](gpubuffer), offset?: number, size?: number,): undefined; setViewport( x: number, y: number, width: number, height: number, minDepth: number, maxDepth: number,): undefined; writeTimestamp(querySet: [GPUQuerySet](gpuqueryset), queryIndex: number): undefined; } ``` Implements ---------- > `[GPUObjectBase](gpuobjectbase)` > `[GPUProgrammablePassEncoder](gpuprogrammablepassencoder)` > `[GPURenderEncoderBase](gpurenderencoderbase)` Properties ---------- > `label: string` Methods ------- > `beginOcclusionQuery(queryIndex: number): undefined` > `beginPipelineStatisticsQuery(querySet: [GPUQuerySet](gpuqueryset), queryIndex: number): undefined` > `draw(vertexCount: number, instanceCount?: number, firstVertex?: number, firstInstance?: number): undefined` > `drawIndexed(indexCount: number, instanceCount?: number, firstIndex?: number, baseVertex?: number, firstInstance?: number): undefined` > `drawIndexedIndirect(indirectBuffer: [GPUBuffer](gpubuffer), indirectOffset: number): undefined` > `drawIndirect(indirectBuffer: [GPUBuffer](gpubuffer), indirectOffset: number): undefined` > `end(): undefined` > `endOcclusionQuery(): undefined` > `endPipelineStatisticsQuery(): undefined` > `executeBundles(bundles: [GPURenderBundle](gpurenderbundle)[]): undefined` > 
`insertDebugMarker(markerLabel: string): undefined` > `popDebugGroup(): undefined` > `pushDebugGroup(groupLabel: string): undefined` > `setBindGroup(index: number, bindGroup: [GPUBindGroup](gpubindgroup), dynamicOffsets?: number[]): undefined` > `setBindGroup(index: number, bindGroup: [GPUBindGroup](gpubindgroup), dynamicOffsetsData: Uint32Array, dynamicOffsetsDataStart: number, dynamicOffsetsDataLength: number): undefined` > `setBlendConstant(color: [GPUColor](gpucolor)): undefined` > `setIndexBuffer(buffer: [GPUBuffer](gpubuffer), indexFormat: [GPUIndexFormat](gpuindexformat), offset?: number, size?: number): undefined` > `setPipeline(pipeline: [GPURenderPipeline](gpurenderpipeline)): undefined` > `setScissorRect(x: number, y: number, width: number, height: number): undefined` > `setStencilReference(reference: number): undefined` > `setVertexBuffer(slot: number, buffer: [GPUBuffer](gpubuffer), offset?: number, size?: number): undefined` > `setViewport(x: number, y: number, width: number, height: number, minDepth: number, maxDepth: number): undefined` > `writeTimestamp(querySet: [GPUQuerySet](gpuqueryset), queryIndex: number): undefined` deno Deno.watchFs Deno.watchFs ============ Watch for file system events against one or more `paths`, which can be files or directories. These paths must exist already. One user action (e.g. `touch test.file`) can generate multiple file system events. Likewise, one user action can result in multiple file paths in one event (e.g. `mv old_name.txt new_name.txt`). The recursive option is `true` by default and, for directories, will watch the specified directory and all subdirectories. Note that the exact ordering of the events can vary between operating systems. ``` const watcher = Deno.watchFs("/"); for await (const event of watcher) { console.log(">>>> event", event); // { kind: "create", paths: [ "/foo.txt" ] } } ``` Call `watcher.close()` to stop watching. 
``` const watcher = Deno.watchFs("/"); setTimeout(() => { watcher.close(); }, 5000); for await (const event of watcher) { console.log(">>>> event", event); } ``` Requires `allow-read` permission. ``` function watchFs(paths: string | string[], options?: {recursive: boolean; }): [FsWatcher](deno.fswatcher); ``` > `watchFs(paths: string | string[], options?: {recursive: boolean; }): [FsWatcher](deno.fswatcher)` ### Parameters > `paths: string | string[]` > `options?: {recursive: boolean; } optional` ### Return Type > `[FsWatcher](deno.fswatcher)` deno Deno.isatty Deno.isatty =========== Check if a given resource id (`rid`) is a TTY (a terminal). ``` // This example is system and context specific const nonTTYRid = Deno.openSync("my_file.txt").rid; const ttyRid = Deno.openSync("/dev/tty6").rid; console.log(Deno.isatty(nonTTYRid)); // false console.log(Deno.isatty(ttyRid)); // true Deno.close(nonTTYRid); Deno.close(ttyRid); ``` ``` function isatty(rid: number): boolean; ``` > `isatty(rid: number): boolean` ### Parameters > `rid: number` ### Return Type > `boolean` deno ByteLengthQueuingStrategy ByteLengthQueuingStrategy ========================= ``` interface ByteLengthQueuingStrategy extends [QueuingStrategy](queuingstrategy)<ArrayBufferView> {highWaterMark: number; size(chunk: ArrayBufferView): number; } ``` ``` var ByteLengthQueuingStrategy: {prototype: [ByteLengthQueuingStrategy](bytelengthqueuingstrategy); new (options: {highWaterMark: number; }): [ByteLengthQueuingStrategy](bytelengthqueuingstrategy); }; ``` Extends ------- > `[QueuingStrategy](queuingstrategy)<ArrayBufferView>` Properties ---------- > `highWaterMark: number` Methods ------- > `size(chunk: ArrayBufferView): number` deno Deno.metrics Deno.metrics ============ Receive metrics from the privileged side of Deno. This is primarily used in the development of Deno. *Ops*, also called *bindings*, are the go-between between the Deno JavaScript sandbox and the rest of Deno. 
``` > console.table(Deno.metrics()) ┌─────────────────────────┬────────┐ │ (index) │ Values │ ├─────────────────────────┼────────┤ │ opsDispatched │ 3 │ │ opsDispatchedSync │ 2 │ │ opsDispatchedAsync │ 1 │ │ opsDispatchedAsyncUnref │ 0 │ │ opsCompleted │ 3 │ │ opsCompletedSync │ 2 │ │ opsCompletedAsync │ 1 │ │ opsCompletedAsyncUnref │ 0 │ │ bytesSentControl │ 73 │ │ bytesSentData │ 0 │ │ bytesReceived │ 375 │ └─────────────────────────┴────────┘ ``` ``` function metrics(): [Metrics](deno.metrics); ``` > `metrics(): [Metrics](deno.metrics)` ### Return Type > `[Metrics](deno.metrics)` deno GPUPipelineStatisticName GPUPipelineStatisticName ======================== ``` type GPUPipelineStatisticName = | "vertex-shader-invocations" | "clipper-invocations" | "clipper-primitives-out" | "fragment-shader-invocations" | "compute-shader-invocations"; ``` Type ---- > `"vertex-shader-invocations" | "clipper-invocations" | "clipper-primitives-out" | "fragment-shader-invocations" | "compute-shader-invocations"` deno Deno.Permissions Deno.Permissions ================ ``` class Permissions { query(desc: [PermissionDescriptor](deno.permissiondescriptor)): Promise<[PermissionStatus](deno.permissionstatus)>; request(desc: [PermissionDescriptor](deno.permissiondescriptor)): Promise<[PermissionStatus](deno.permissionstatus)>; revoke(desc: [PermissionDescriptor](deno.permissiondescriptor)): Promise<[PermissionStatus](deno.permissionstatus)>; } ``` Methods ------- > `query(desc: [PermissionDescriptor](deno.permissiondescriptor)): Promise<[PermissionStatus](deno.permissionstatus)>` Resolves to the current status of a permission. ``` const status = await Deno.permissions.query({ name: "read", path: "/etc" }); console.log(status.state); ``` > `request(desc: [PermissionDescriptor](deno.permissiondescriptor)): Promise<[PermissionStatus](deno.permissionstatus)>` Requests the permission, and resolves to the state of the permission. 
``` const status = await Deno.permissions.request({ name: "env" }); if (status.state === "granted") { console.log("'env' permission is granted."); } else { console.log("'env' permission is denied."); } ``` > `revoke(desc: [PermissionDescriptor](deno.permissiondescriptor)): Promise<[PermissionStatus](deno.permissionstatus)>` Revokes a permission, and resolves to the state of the permission. ``` import { assert } from "https://deno.land/std/testing/asserts.ts"; const status = await Deno.permissions.revoke({ name: "run" }); assert(status.state !== "granted") ``` deno GPUCommandBuffer GPUCommandBuffer ================ ``` class GPUCommandBuffer implements [GPUObjectBase](gpuobjectbase) { label: string; } ``` Implements ---------- > `[GPUObjectBase](gpuobjectbase)` Properties ---------- > `label: string` deno GPUTextureFormat GPUTextureFormat ================ ``` type GPUTextureFormat = | "r8unorm" | "r8snorm" | "r8uint" | "r8sint" | "r16uint" | "r16sint" | "r16float" | "rg8unorm" | "rg8snorm" | "rg8uint" | "rg8sint" | "r32uint" | "r32sint" | "r32float" | "rg16uint" | "rg16sint" | "rg16float" | "rgba8unorm" | "rgba8unorm-srgb" | "rgba8snorm" | "rgba8uint" | "rgba8sint" | "bgra8unorm" | "bgra8unorm-srgb" | "rgb9e5ufloat" | "rgb10a2unorm" | "rg11b10ufloat" | "rg32uint" | "rg32sint" | "rg32float" | "rgba16uint" | "rgba16sint" | "rgba16float" | "rgba32uint" | "rgba32sint" | "rgba32float" | "stencil8" | "depth16unorm" | "depth24plus" | "depth24plus-stencil8" | "depth32float" | "depth24unorm-stencil8" | "depth32float-stencil8" | "bc1-rgba-unorm" | "bc1-rgba-unorm-srgb" | "bc2-rgba-unorm" | "bc2-rgba-unorm-srgb" | "bc3-rgba-unorm" | "bc3-rgba-unorm-srgb" | "bc4-r-unorm" | "bc4-r-snorm" | "bc5-rg-unorm" | "bc5-rg-snorm" | "bc6h-rgb-ufloat" | "bc6h-rgb-float" | "bc7-rgba-unorm" | "bc7-rgba-unorm-srgb" | "etc2-rgb8unorm" | "etc2-rgb8unorm-srgb" | "etc2-rgb8a1unorm" | "etc2-rgb8a1unorm-srgb" | "etc2-rgba8unorm" | "etc2-rgba8unorm-srgb" | "eac-r11unorm" | "eac-r11snorm" | 
"eac-rg11unorm" | "eac-rg11snorm" | "astc-4x4-unorm" | "astc-4x4-unorm-srgb" | "astc-5x4-unorm" | "astc-5x4-unorm-srgb" | "astc-5x5-unorm" | "astc-5x5-unorm-srgb" | "astc-6x5-unorm" | "astc-6x5-unorm-srgb" | "astc-6x6-unorm" | "astc-6x6-unorm-srgb" | "astc-8x5-unorm" | "astc-8x5-unorm-srgb" | "astc-8x6-unorm" | "astc-8x6-unorm-srgb" | "astc-8x8-unorm" | "astc-8x8-unorm-srgb" | "astc-10x5-unorm" | "astc-10x5-unorm-srgb" | "astc-10x6-unorm" | "astc-10x6-unorm-srgb" | "astc-10x8-unorm" | "astc-10x8-unorm-srgb" | "astc-10x10-unorm" | "astc-10x10-unorm-srgb" | "astc-12x10-unorm" | "astc-12x10-unorm-srgb" | "astc-12x12-unorm" | "astc-12x12-unorm-srgb"; ``` Type ---- > `"r8unorm" | "r8snorm" | "r8uint" | "r8sint" | "r16uint" | "r16sint" | "r16float" | "rg8unorm" | "rg8snorm" | "rg8uint" | "rg8sint" | "r32uint" | "r32sint" | "r32float" | "rg16uint" | "rg16sint" | "rg16float" | "rgba8unorm" | "rgba8unorm-srgb" | "rgba8snorm" | "rgba8uint" | "rgba8sint" | "bgra8unorm" | "bgra8unorm-srgb" | "rgb9e5ufloat" | "rgb10a2unorm" | "rg11b10ufloat" | "rg32uint" | "rg32sint" | "rg32float" | "rgba16uint" | "rgba16sint" | "rgba16float" | "rgba32uint" | "rgba32sint" | "rgba32float" | "stencil8" | "depth16unorm" | "depth24plus" | "depth24plus-stencil8" | "depth32float" | "depth24unorm-stencil8" | "depth32float-stencil8" | "bc1-rgba-unorm" | "bc1-rgba-unorm-srgb" | "bc2-rgba-unorm" | "bc2-rgba-unorm-srgb" | "bc3-rgba-unorm" | "bc3-rgba-unorm-srgb" | "bc4-r-unorm" | "bc4-r-snorm" | "bc5-rg-unorm" | "bc5-rg-snorm" | "bc6h-rgb-ufloat" | "bc6h-rgb-float" | "bc7-rgba-unorm" | "bc7-rgba-unorm-srgb" | "etc2-rgb8unorm" | "etc2-rgb8unorm-srgb" | "etc2-rgb8a1unorm" | "etc2-rgb8a1unorm-srgb" | "etc2-rgba8unorm" | "etc2-rgba8unorm-srgb" | "eac-r11unorm" | "eac-r11snorm" | "eac-rg11unorm" | "eac-rg11snorm" | "astc-4x4-unorm" | "astc-4x4-unorm-srgb" | "astc-5x4-unorm" | "astc-5x4-unorm-srgb" | "astc-5x5-unorm" | "astc-5x5-unorm-srgb" | "astc-6x5-unorm" | "astc-6x5-unorm-srgb" | "astc-6x6-unorm" | 
"astc-6x6-unorm-srgb" | "astc-8x5-unorm" | "astc-8x5-unorm-srgb" | "astc-8x6-unorm" | "astc-8x6-unorm-srgb" | "astc-8x8-unorm" | "astc-8x8-unorm-srgb" | "astc-10x5-unorm" | "astc-10x5-unorm-srgb" | "astc-10x6-unorm" | "astc-10x6-unorm-srgb" | "astc-10x8-unorm" | "astc-10x8-unorm-srgb" | "astc-10x10-unorm" | "astc-10x10-unorm-srgb" | "astc-12x10-unorm" | "astc-12x10-unorm-srgb" | "astc-12x12-unorm" | "astc-12x12-unorm-srgb"` deno Deno.UnixAddr Deno.UnixAddr ============= ``` interface UnixAddr {path: string; transport: "unix" | "unixpacket"; } ``` Properties ---------- > `path: string` > `transport: "unix" | "unixpacket"` deno FileReader FileReader ========== Lets web applications asynchronously read the contents of files (or raw data buffers) stored on the user's computer, using File or Blob objects to specify the file or data to read. ``` interface FileReader extends [EventTarget](eventtarget) { readonly DONE: number; readonly EMPTY: number; readonly error: [DOMException](domexception) | null; readonly LOADING: number; onabort: ((this: [FileReader](filereader), ev: [ProgressEvent](progressevent)<[FileReader](filereader)>) => any) | null; onerror: ((this: [FileReader](filereader), ev: [ProgressEvent](progressevent)<[FileReader](filereader)>) => any) | null; onload: ((this: [FileReader](filereader), ev: [ProgressEvent](progressevent)<[FileReader](filereader)>) => any) | null; onloadend: ((this: [FileReader](filereader), ev: [ProgressEvent](progressevent)<[FileReader](filereader)>) => any) | null; onloadstart: ((this: [FileReader](filereader), ev: [ProgressEvent](progressevent)<[FileReader](filereader)>) => any) | null; onprogress: ((this: [FileReader](filereader), ev: [ProgressEvent](progressevent)<[FileReader](filereader)>) => any) | null; readonly readyState: number; readonly result: string | ArrayBuffer | null; abort(): void; addEventListener<K extends keyof [FileReaderEventMap](filereadereventmap)>( type: K, listener: (this: [FileReader](filereader), ev: 
[FileReaderEventMap](filereadereventmap)[K]) => any, options?: boolean | [AddEventListenerOptions](addeventlisteneroptions),): void; addEventListener( type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [AddEventListenerOptions](addeventlisteneroptions),): void; readAsArrayBuffer(blob: [Blob](blob)): void; readAsBinaryString(blob: [Blob](blob)): void; readAsDataURL(blob: [Blob](blob)): void; readAsText(blob: [Blob](blob), encoding?: string): void; removeEventListener<K extends keyof [FileReaderEventMap](filereadereventmap)>( type: K, listener: (this: [FileReader](filereader), ev: [FileReaderEventMap](filereadereventmap)[K]) => any, options?: boolean | [EventListenerOptions](eventlisteneroptions),): void; removeEventListener( type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [EventListenerOptions](eventlisteneroptions),): void; } ``` ``` var FileReader: {prototype: [FileReader](filereader); readonly DONE: number; readonly EMPTY: number; readonly LOADING: number; new (): [FileReader](filereader); }; ``` Extends ------- > `[EventTarget](eventtarget)` Properties ---------- > `readonly DONE: number` > `readonly EMPTY: number` > `readonly error: [DOMException](domexception) | null` > `readonly LOADING: number` > `onabort: ((this: [FileReader](filereader), ev: [ProgressEvent](progressevent)<[FileReader](filereader)>) => any) | null` > `onerror: ((this: [FileReader](filereader), ev: [ProgressEvent](progressevent)<[FileReader](filereader)>) => any) | null` > `onload: ((this: [FileReader](filereader), ev: [ProgressEvent](progressevent)<[FileReader](filereader)>) => any) | null` > `onloadend: ((this: [FileReader](filereader), ev: [ProgressEvent](progressevent)<[FileReader](filereader)>) => any) | null` > `onloadstart: ((this: [FileReader](filereader), ev: [ProgressEvent](progressevent)<[FileReader](filereader)>) => any) | null` > `onprogress: 
((this: [FileReader](filereader), ev: [ProgressEvent](progressevent)<[FileReader](filereader)>) => any) | null` > `readonly readyState: number` > `readonly result: string | ArrayBuffer | null` Methods ------- > `abort(): void` > `addEventListener<K extends keyof [FileReaderEventMap](filereadereventmap)>( > type: K, > > listener: (this: [FileReader](filereader), ev: [FileReaderEventMap](filereadereventmap)[K]) => any, > > options?: boolean | [AddEventListenerOptions](addeventlisteneroptions),): void` > `addEventListener( > type: string, > > listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), > > options?: boolean | [AddEventListenerOptions](addeventlisteneroptions),): void` > `readAsArrayBuffer(blob: [Blob](blob)): void` > `readAsBinaryString(blob: [Blob](blob)): void` > `readAsDataURL(blob: [Blob](blob)): void` > `readAsText(blob: [Blob](blob), encoding?: string): void` > `removeEventListener<K extends keyof [FileReaderEventMap](filereadereventmap)>( > type: K, > > listener: (this: [FileReader](filereader), ev: [FileReaderEventMap](filereadereventmap)[K]) => any, > > options?: boolean | [EventListenerOptions](eventlisteneroptions),): void` > `removeEventListener( > type: string, > > listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), > > options?: boolean | [EventListenerOptions](eventlisteneroptions),): void`
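`FileReader` delivers its result through events rather than a return value, so a common pattern is to wrap a read in a Promise. A minimal sketch of that pattern; the constructor is passed in as a parameter (and the `FileReaderLike` shape is an illustration, not part of the API) so the helper is not tied to runtimes that expose a global `FileReader`:

```typescript
// Minimal structural shape of the FileReader members the helper uses.
// This interface is illustrative only; real runtimes provide FileReader.
interface FileReaderLike {
  result: string | ArrayBuffer | null;
  error: unknown;
  onload: (() => void) | null;
  onerror: (() => void) | null;
  readAsText(blob: Blob, encoding?: string): void;
}

// Wrap the event-based read in a Promise. Pass the runtime's
// FileReader constructor (e.g. `readBlobAsText(blob, FileReader)`).
function readBlobAsText(
  blob: Blob,
  FR: new () => FileReaderLike,
): Promise<string> {
  return new Promise((resolve, reject) => {
    const reader = new FR();
    reader.onload = () => resolve(reader.result as string);
    reader.onerror = () => reject(reader.error);
    reader.readAsText(blob);
  });
}
```

In a browser or Deno, `await readBlobAsText(someBlob, FileReader)` resolves once the `load` event fires.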
deno Navigator Navigator ========= ``` class Navigator { constructor(); readonly gpu: [GPU](gpu); readonly hardwareConcurrency: number; readonly userAgent: string; } ``` Constructors ------------ > `new Navigator()` Properties ---------- > `gpu: [GPU](gpu)` > `hardwareConcurrency: number` > `userAgent: string` deno GPUMapMode GPUMapMode ========== ``` class GPUMapMode { static READ: 1; static WRITE: 2; } ``` Static Properties ----------------- > `READ: 1` > `WRITE: 2` deno Deno.readTextFile Deno.readTextFile ================= Asynchronously reads and returns the entire contents of a file as a UTF-8 decoded string. Reading a directory throws an error. ``` const data = await Deno.readTextFile("hello.txt"); console.log(data); ``` Requires `allow-read` permission. ``` function readTextFile(path: string | [URL](url), options?: [ReadFileOptions](deno.readfileoptions)): Promise<string>; ``` > `readTextFile(path: string | [URL](url), options?: [ReadFileOptions](deno.readfileoptions)): Promise<string>` ### Parameters > `path: string | [URL](url)` > `options?: [ReadFileOptions](deno.readfileoptions) optional` ### Return Type > `Promise<string>` deno Deno.FsEventFlag Deno.FsEventFlag ================ Additional information for FsEvent objects with the "other" kind. * `"rescan"`: rescan notices indicate either a lapse in the events or a change in the filesystem such that events received so far can no longer be relied on to represent the state of the filesystem now. An application that simply reacts to file changes may not care about this. An application that keeps an in-memory representation of the filesystem will need to care, and will need to refresh that representation directly from the filesystem. 
``` type FsEventFlag = "rescan"; ``` Type ---- > `"rescan"` deno Deno.TlsConn Deno.TlsConn ============ ``` interface TlsConn extends [Conn](deno.conn) {handshake(): Promise<[TlsHandshakeInfo](deno.tlshandshakeinfo)>; } ``` Extends ------- > `[Conn](deno.conn)` Methods ------- > `handshake(): Promise<[TlsHandshakeInfo](deno.tlshandshakeinfo)>` Runs the client or server handshake protocol to completion if that has not happened yet. Calling this method is optional; the TLS handshake will be completed automatically as soon as data is sent or received. deno Deno.TestDefinition Deno.TestDefinition =================== ``` interface TestDefinition {fn: (t: [TestContext](deno.testcontext)) => void | Promise<void>; ignore?: boolean; name: string; only?: boolean; permissions?: [PermissionOptions](deno.permissionoptions); sanitizeExit?: boolean; sanitizeOps?: boolean; sanitizeResources?: boolean; } ``` Properties ---------- > `fn: (t: [TestContext](deno.testcontext)) => void | Promise<void>` > `ignore?: boolean` If truthy the current test will be ignored. It is a quick way to skip over a test, but it can also be used for conditional logic, like determining if an environment feature is present. > `name: string` The name of the test. > `only?: boolean` If at least one test has `only` set to `true`, only run tests that have `only` set to `true` and fail the test suite. > `permissions?: [PermissionOptions](deno.permissionoptions)` Specifies the permissions that should be used to run the test. Set this to "inherit" to keep the calling runtime permissions, set this to "none" to revoke all permissions, or set a more specific set of permissions using a [`PermissionOptionsObject`](deno.permissionoptionsobject). Defaults to `"inherit"`. > `sanitizeExit?: boolean` Ensure the test case does not prematurely cause the process to exit, for example via a call to [`Deno.exit`](deno#exit). Defaults to `true`. 
> `sanitizeOps?: boolean` Check that the number of async completed operations after the test is the same as the number of dispatched operations. This ensures that the code tested does not start async operations which it then does not await. This helps in preventing logic errors and memory leaks in the application code. Defaults to `true`. > `sanitizeResources?: boolean` Ensure the test case does not "leak" resources - like open files or network connections - by ensuring the open resources at the start of the test match the open resources at the end of the test. Defaults to `true`. deno WebAssembly.Global WebAssembly.Global ================== A `WebAssembly.Global` object represents a global variable instance, accessible from both JavaScript and importable/exportable across one or more `WebAssembly.Module` instances. This allows dynamic linking of multiple modules. [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WebAssembly/Global) ``` class Global { constructor(descriptor: [GlobalDescriptor](webassembly.globaldescriptor), v?: any); value: any; valueOf(): any; } ``` Constructors ------------ > `new Global(descriptor: [GlobalDescriptor](webassembly.globaldescriptor), v?: any)` Creates a new `Global` object. Properties ---------- > `value: any` The value contained inside the global variable — this can be used to directly set and get the global's value. Methods ------- > `valueOf(): any` Old-style method that returns the value contained inside the global variable. deno Deno.lstat Deno.lstat ========== Resolves to a [`Deno.FileInfo`](deno#FileInfo) for the specified `path`. If `path` is a symlink, information for the symlink will be returned instead of what it points to. ``` import { assert } from "https://deno.land/std/testing/asserts.ts"; const fileInfo = await Deno.lstat("hello.txt"); assert(fileInfo.isFile); ``` Requires `allow-read` permission. 
``` function lstat(path: string | [URL](url)): Promise<[FileInfo](deno.fileinfo)>; ``` > `lstat(path: string | [URL](url)): Promise<[FileInfo](deno.fileinfo)>` ### Parameters > `path: string | [URL](url)` ### Return Type > `Promise<[FileInfo](deno.fileinfo)>` deno GPUColor GPUColor ======== ``` type GPUColor = number[] | [GPUColorDict](gpucolordict); ``` Type ---- > `number[] | [GPUColorDict](gpucolordict)` deno GPURenderBundleEncoder GPURenderBundleEncoder ====================== ``` class GPURenderBundleEncoder implements [GPUObjectBase](gpuobjectbase), [GPUProgrammablePassEncoder](gpuprogrammablepassencoder), [GPURenderEncoderBase](gpurenderencoderbase) { label: string; draw( vertexCount: number, instanceCount?: number, firstVertex?: number, firstInstance?: number,): undefined; drawIndexed( indexCount: number, instanceCount?: number, firstIndex?: number, baseVertex?: number, firstInstance?: number,): undefined; drawIndexedIndirect(indirectBuffer: [GPUBuffer](gpubuffer), indirectOffset: number): undefined; drawIndirect(indirectBuffer: [GPUBuffer](gpubuffer), indirectOffset: number): undefined; finish(descriptor?: [GPURenderBundleDescriptor](gpurenderbundledescriptor)): [GPURenderBundle](gpurenderbundle); insertDebugMarker(markerLabel: string): undefined; popDebugGroup(): undefined; pushDebugGroup(groupLabel: string): undefined; setBindGroup( index: number, bindGroup: [GPUBindGroup](gpubindgroup), dynamicOffsets?: number[],): undefined; setBindGroup( index: number, bindGroup: [GPUBindGroup](gpubindgroup), dynamicOffsetsData: Uint32Array, dynamicOffsetsDataStart: number, dynamicOffsetsDataLength: number,): undefined; setIndexBuffer( buffer: [GPUBuffer](gpubuffer), indexFormat: [GPUIndexFormat](gpuindexformat), offset?: number, size?: number,): undefined; setPipeline(pipeline: [GPURenderPipeline](gpurenderpipeline)): undefined; setVertexBuffer( slot: number, buffer: [GPUBuffer](gpubuffer), offset?: number, size?: number,): undefined; } ``` Implements ---------- > 
`[GPUObjectBase](gpuobjectbase)` > `[GPUProgrammablePassEncoder](gpuprogrammablepassencoder)` > `[GPURenderEncoderBase](gpurenderencoderbase)` Properties ---------- > `label: string` Methods ------- > `draw(vertexCount: number, instanceCount?: number, firstVertex?: number, firstInstance?: number): undefined` > `drawIndexed(indexCount: number, instanceCount?: number, firstIndex?: number, baseVertex?: number, firstInstance?: number): undefined` > `drawIndexedIndirect(indirectBuffer: [GPUBuffer](gpubuffer), indirectOffset: number): undefined` > `drawIndirect(indirectBuffer: [GPUBuffer](gpubuffer), indirectOffset: number): undefined` > `finish(descriptor?: [GPURenderBundleDescriptor](gpurenderbundledescriptor)): [GPURenderBundle](gpurenderbundle)` > `insertDebugMarker(markerLabel: string): undefined` > `popDebugGroup(): undefined` > `pushDebugGroup(groupLabel: string): undefined` > `setBindGroup(index: number, bindGroup: [GPUBindGroup](gpubindgroup), dynamicOffsets?: number[]): undefined` > `setBindGroup(index: number, bindGroup: [GPUBindGroup](gpubindgroup), dynamicOffsetsData: Uint32Array, dynamicOffsetsDataStart: number, dynamicOffsetsDataLength: number): undefined` > `setIndexBuffer(buffer: [GPUBuffer](gpubuffer), indexFormat: [GPUIndexFormat](gpuindexformat), offset?: number, size?: number): undefined` > `setPipeline(pipeline: [GPURenderPipeline](gpurenderpipeline)): undefined` > `setVertexBuffer(slot: number, buffer: [GPUBuffer](gpubuffer), offset?: number, size?: number): undefined` deno reportError reportError =========== Dispatch an uncaught exception. Similar to a synchronous version of: ``` setTimeout(() => { throw error; }, 0); ``` The error can not be caught with a `try/catch` block. An error event will be dispatched to the global scope. 
You can prevent the error from being reported to the console with `Event.prototype.preventDefault()`: ``` addEventListener("error", (event) => { event.preventDefault(); }); reportError(new Error("foo")); // Will not be reported. ``` In Deno, this error will terminate the process if not intercepted like above. ``` function reportError(error: any): void; ``` > `reportError(error: any): void` ### Parameters > `error: any` ### Return Type > `void` deno Deno.TestStepDefinition Deno.TestStepDefinition ======================= ``` interface TestStepDefinition {fn: (t: [TestContext](deno.testcontext)) => void | Promise<void>; ignore?: boolean; name: string; sanitizeExit?: boolean; sanitizeOps?: boolean; sanitizeResources?: boolean; } ``` Properties ---------- > `fn: (t: [TestContext](deno.testcontext)) => void | Promise<void>` The test function that will be tested when this step is executed. The function can take an argument which will provide information about the current step's context. > `ignore?: boolean` If truthy the current test step will be ignored. This is a quick way to skip over a step, but also can be used for conditional logic, like determining if an environment feature is present. > `name: string` The name of the step. > `sanitizeExit?: boolean` Ensure the test step does not prematurely cause the process to exit, for example via a call to [`Deno.exit`](deno#exit). Defaults to the parent test or step's value. > `sanitizeOps?: boolean` Check that the number of async completed operations after the test step is the same as number of dispatched operations. This ensures that the code tested does not start async operations which it then does not await. This helps in preventing logic errors and memory leaks in the application code. Defaults to the parent test or step's value. 
> `sanitizeResources?: boolean` Ensure the test step does not "leak" resources - like open files or network connections - by ensuring the open resources at the start of the step match the open resources at the end of the step. Defaults to the parent test or step's value. deno WebAssembly.Module WebAssembly.Module ================== A `WebAssembly.Module` object contains stateless WebAssembly code that has already been compiled by the browser — this can be efficiently shared with Workers, and instantiated multiple times. [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WebAssembly/Module) ``` class Module { constructor(bytes: [BufferSource](buffersource)); static customSections(moduleObject: [Module](webassembly.module), sectionName: string): ArrayBuffer[]; static exports(moduleObject: [Module](webassembly.module)): [ModuleExportDescriptor](webassembly.moduleexportdescriptor)[]; static imports(moduleObject: [Module](webassembly.module)): [ModuleImportDescriptor](webassembly.moduleimportdescriptor)[]; } ``` Constructors ------------ > `new Module(bytes: [BufferSource](buffersource))` Creates a new `Module` object. Static Methods -------------- > `customSections(moduleObject: [Module](webassembly.module), sectionName: string): ArrayBuffer[]` Given a `Module` and string, returns a copy of the contents of all custom sections in the module with the given string name. > `exports(moduleObject: [Module](webassembly.module)): [ModuleExportDescriptor](webassembly.moduleexportdescriptor)[]` Given a `Module`, returns an array containing descriptions of all the declared exports. > `imports(moduleObject: [Module](webassembly.module)): [ModuleImportDescriptor](webassembly.moduleimportdescriptor)[]` Given a `Module`, returns an array containing descriptions of all the declared imports. 
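The static inspection methods above can be tried on the smallest valid module. A minimal sketch, assuming the 8-byte header (`\0asm` magic number followed by binary version 1), which compiles to an empty module with no imports or exports:

```typescript
// The smallest valid WebAssembly binary: "\0asm" magic + version 1.
const bytes = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);

// Compilation is synchronous for small buffers like this one.
const mod = new WebAssembly.Module(bytes);

// An empty module declares nothing, so both lists are empty.
console.log(WebAssembly.Module.exports(mod)); // []
console.log(WebAssembly.Module.imports(mod)); // []
```

For real modules, each descriptor in the returned arrays carries the item's `name` and `kind` (e.g. `"function"` or `"memory"`).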
deno Deno.HrtimePermissionDescriptor Deno.HrtimePermissionDescriptor =============================== ``` interface HrtimePermissionDescriptor {name: "hrtime"; } ``` Properties ---------- > `name: "hrtime"` deno HashAlgorithmIdentifier HashAlgorithmIdentifier ======================= ``` type HashAlgorithmIdentifier = [AlgorithmIdentifier](algorithmidentifier); ``` Type ---- > `[AlgorithmIdentifier](algorithmidentifier)` deno Deno.SysPermissionDescriptor Deno.SysPermissionDescriptor ============================ ``` interface SysPermissionDescriptor {kind?: | "loadavg" | "hostname" | "systemMemoryInfo" | "networkInterfaces" | "osRelease" | "getUid" | "getGid"; name: "sys"; } ``` Properties ---------- > `kind?: "loadavg" | "hostname" | "systemMemoryInfo" | "networkInterfaces" | "osRelease" | "getUid" | "getGid"` > `name: "sys"` deno HeadersInit HeadersInit =========== ``` type HeadersInit = [Headers](headers) | string[][] | Record<string, string>; ``` Type ---- > `[Headers](headers) | string[][] | Record<string, string>` deno Deno.shutdown Deno.shutdown ============= Shutdown socket send operations. Matches behavior of POSIX shutdown(3). ``` const listener = Deno.listen({ port: 80 }); const conn = await listener.accept(); Deno.shutdown(conn.rid); ``` ``` function shutdown(rid: number): Promise<void>; ``` > `shutdown(rid: number): Promise<void>` ### Parameters > `rid: number` ### Return Type > `Promise<void>` deno GPUBindingResource GPUBindingResource ================== ``` type GPUBindingResource = [GPUSampler](gpusampler) | [GPUTextureView](gputextureview) | [GPUBufferBinding](gpubufferbinding); ``` Type ---- > `[GPUSampler](gpusampler) | [GPUTextureView](gputextureview) | [GPUBufferBinding](gpubufferbinding)` deno Deno.noColor Deno.noColor ============ Reflects the `NO_COLOR` environment variable at program start. 
When the value is `true`, the Deno CLI will attempt to not send color codes to `stderr` or `stdout` and other command line programs should also attempt to respect this value. See: <https://no-color.org/> ``` const noColor: boolean; ``` deno setInterval setInterval =========== Repeatedly calls a function, with a fixed time delay between each call. ``` // Outputs 'hello' to the console every 500ms setInterval(() => { console.log('hello'); }, 500); ``` ``` function setInterval( cb: (...args: any[]) => void, delay?: number, ...args: any[],): number; ``` > `setInterval(cb: (...args: any[]) => void, delay?: number, ...args: any[]): number` ### Parameters > `cb: (...args: any[]) => void` > `delay?: number optional` > `...args: any[] optional` ### Return Type > `number` deno Deno.Listener Deno.Listener ============= A generic network listener for stream-oriented protocols. ``` interface Listener extends AsyncIterable<[Conn](deno.conn)> { readonly addr: [Addr](deno.addr); readonly rid: number; [[Symbol.asyncIterator]](): AsyncIterableIterator<[Conn](deno.conn)>; accept(): Promise<[Conn](deno.conn)>; close(): void; } ``` Extends ------- > `AsyncIterable<[Conn](deno.conn)>` Properties ---------- > `readonly addr: [Addr](deno.addr)` Return the address of the `Listener`. > `readonly rid: number` Return the rid of the `Listener`. Methods ------- > `[[Symbol.asyncIterator]](): AsyncIterableIterator<[Conn](deno.conn)>` > `accept(): Promise<[Conn](deno.conn)>` Waits for and resolves to the next connection to the `Listener`. > `close(): void` Closes the listener. Any pending accept promises will be rejected with errors. 
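Because `Listener` extends `AsyncIterable<Conn>`, incoming connections are usually consumed with a `for await` loop rather than repeated `accept()` calls. A runtime-agnostic sketch of that accept loop; the `serve` helper and its generic `Conn` parameter are illustrative, not part of the Deno API:

```typescript
// Consume any async iterable of connections, handling each one
// without blocking the accept loop. With Deno this would be called
// as `serve(Deno.listen({ port: 8080 }), handleConn)`.
async function serve<Conn>(
  listener: AsyncIterable<Conn>,
  handle: (conn: Conn) => Promise<void>,
): Promise<void> {
  for await (const conn of listener) {
    // Fire and forget: a slow connection must not stall new accepts.
    handle(conn).catch((err) => console.error("connection failed:", err));
  }
}
```

The loop ends when the listener is closed, since closing rejects pending accepts and terminates the async iteration.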
deno GPUCompilationMessage GPUCompilationMessage ===================== ``` interface GPUCompilationMessage { readonly lineNum: number; readonly linePos: number; readonly message: string; readonly type: [GPUCompilationMessageType](gpucompilationmessagetype); } ``` Properties ---------- > `readonly lineNum: number` > `readonly linePos: number` > `readonly message: string` > `readonly type: [GPUCompilationMessageType](gpucompilationmessagetype)` deno AesGcmParams AesGcmParams ============ ``` interface AesGcmParams extends [Algorithm](algorithm) {additionalData?: [BufferSource](buffersource); iv: [BufferSource](buffersource); tagLength?: number; } ``` Extends ------- > `[Algorithm](algorithm)` Properties ---------- > `additionalData?: [BufferSource](buffersource)` > `iv: [BufferSource](buffersource)` > `tagLength?: number` deno GPUOrigin3DDict GPUOrigin3DDict =============== ``` interface GPUOrigin3DDict {x?: number; y?: number; z?: number; } ``` Properties ---------- > `x?: number` > `y?: number` > `z?: number` deno GPUComputePassDescriptor GPUComputePassDescriptor ======================== ``` interface GPUComputePassDescriptor extends [GPUObjectDescriptorBase](gpuobjectdescriptorbase) {} ``` Extends ------- > `[GPUObjectDescriptorBase](gpuobjectdescriptorbase)` deno GPUComputePipelineDescriptor GPUComputePipelineDescriptor ============================ ``` interface GPUComputePipelineDescriptor extends [GPUPipelineDescriptorBase](gpupipelinedescriptorbase) {compute: [GPUProgrammableStage](gpuprogrammablestage); } ``` Extends ------- > `[GPUPipelineDescriptorBase](gpupipelinedescriptorbase)` Properties ---------- > `compute: [GPUProgrammableStage](gpuprogrammablestage)` deno TransformStreamDefaultControllerTransformCallback TransformStreamDefaultControllerTransformCallback ================================================= ``` interface TransformStreamDefaultControllerTransformCallback <I, O> {(chunk: I, controller: 
[TransformStreamDefaultController](transformstreamdefaultcontroller)<O>): void | PromiseLike<void>;} ``` Type Parameters --------------- > `I` > `O` Call Signatures --------------- > `(chunk: I, controller: [TransformStreamDefaultController](transformstreamdefaultcontroller)<O>): void | PromiseLike<void>` deno FormData FormData ======== Provides a way to easily construct a set of key/value pairs representing form fields and their values, which can then be easily sent using the XMLHttpRequest.send() method. It uses the same format a form would use if the encoding type were set to "multipart/form-data". ``` interface FormData {[[Symbol.iterator]](): IterableIterator<[string, [FormDataEntryValue](formdataentryvalue)]>; append( name: string, value: string | [Blob](blob), fileName?: string,): void; delete(name: string): void; entries(): IterableIterator<[string, [FormDataEntryValue](formdataentryvalue)]>; forEach(callback: ( value: [FormDataEntryValue](formdataentryvalue), key: string, parent: this,) => void, thisArg?: any): void; get(name: string): [FormDataEntryValue](formdataentryvalue) | null; getAll(name: string): [FormDataEntryValue](formdataentryvalue)[]; has(name: string): boolean; keys(): IterableIterator<string>; set( name: string, value: string | [Blob](blob), fileName?: string,): void; values(): IterableIterator<string>; } ``` ``` var FormData: {prototype: [FormData](formdata); new (): [FormData](formdata); }; ``` Methods ------- > `[[Symbol.iterator]](): IterableIterator<[string, [FormDataEntryValue](formdataentryvalue)]>` > `append( > name: string, > > value: string | [Blob](blob), > > fileName?: string,): void` > `delete(name: string): void` > `entries(): IterableIterator<[string, [FormDataEntryValue](formdataentryvalue)]>` > `forEach(callback: ( > value: [FormDataEntryValue](formdataentryvalue), > > key: string, > > parent: this,) => void, thisArg?: any): void` > `get(name: string): [FormDataEntryValue](formdataentryvalue) | null` > `getAll(name: 
string): [FormDataEntryValue](formdataentryvalue)[]` > `has(name: string): boolean` > `keys(): IterableIterator<string>` > `set( > name: string, > > value: string | [Blob](blob), > > fileName?: string,): void` > `values(): IterableIterator<string>`
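A short sketch of the difference between `append` and `set`: `append` accumulates entries under a name (retrieved together with `getAll`), while `set` replaces every entry for that name:

```typescript
const form = new FormData();

// `append` adds entries; repeated names are all kept, in order.
form.append("tag", "deno");
form.append("tag", "docs");

// `set` replaces any existing entries for the name.
form.set("title", "example");

console.log(form.get("title")); // "example"
console.log(form.getAll("tag")); // ["deno", "docs"]
console.log(form.has("missing")); // false
```

A `FormData` built this way can be passed directly as the `body` of a `fetch` request, which serializes it as `multipart/form-data`.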
deno Deno.connect Deno.connect ============ Connects to the hostname (default is "127.0.0.1") and port on the named transport (default is "tcp"), and resolves to the connection (`Conn`). ``` const conn1 = await Deno.connect({ port: 80 }); const conn2 = await Deno.connect({ hostname: "192.0.2.1", port: 80 }); const conn3 = await Deno.connect({ hostname: "[2001:db8::1]", port: 80 }); const conn4 = await Deno.connect({ hostname: "golang.org", port: 80, transport: "tcp" }); ``` Requires `allow-net` permission for "tcp". ``` function connect(options: [ConnectOptions](deno.connectoptions)): Promise<[TcpConn](deno.tcpconn)>; ``` > `connect(options: [ConnectOptions](deno.connectoptions)): Promise<[TcpConn](deno.tcpconn)>` ### Parameters > `options: [ConnectOptions](deno.connectoptions)` ### Return Type > `Promise<[TcpConn](deno.tcpconn)>` deno Deno.version Deno.version ============ Version related information. ``` const version: {deno: string; v8: string; typescript: string; }; ``` deno BroadcastChannel BroadcastChannel ================ ``` interface BroadcastChannel extends [EventTarget](eventtarget) { readonly name: string; onmessage: ((this: [BroadcastChannel](broadcastchannel), ev: [MessageEvent](messageevent)) => any) | null; onmessageerror: ((this: [BroadcastChannel](broadcastchannel), ev: [MessageEvent](messageevent)) => any) | null; addEventListener<K extends keyof [BroadcastChannelEventMap](broadcastchanneleventmap)>( type: K, listener: (this: [BroadcastChannel](broadcastchannel), ev: [BroadcastChannelEventMap](broadcastchanneleventmap)[K]) => any, options?: boolean | [AddEventListenerOptions](addeventlisteneroptions),): void; addEventListener( type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [AddEventListenerOptions](addeventlisteneroptions),): void; close(): void; postMessage(message: any): void; removeEventListener<K extends keyof [BroadcastChannelEventMap](broadcastchanneleventmap)>( type: K, 
listener: (this: [BroadcastChannel](broadcastchannel), ev: [BroadcastChannelEventMap](broadcastchanneleventmap)[K]) => any, options?: boolean | [EventListenerOptions](eventlisteneroptions),): void; removeEventListener( type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [EventListenerOptions](eventlisteneroptions),): void; } ``` ``` var BroadcastChannel: {prototype: [BroadcastChannel](broadcastchannel); new (name: string): [BroadcastChannel](broadcastchannel); }; ``` Extends ------- > `[EventTarget](eventtarget)` Properties ---------- > `readonly name: string` Returns the channel name (as passed to the constructor). > `onmessage: ((this: [BroadcastChannel](broadcastchannel), ev: [MessageEvent](messageevent)) => any) | null` > `onmessageerror: ((this: [BroadcastChannel](broadcastchannel), ev: [MessageEvent](messageevent)) => any) | null` Methods ------- > `addEventListener<K extends keyof [BroadcastChannelEventMap](broadcastchanneleventmap)>( > type: K, > > listener: (this: [BroadcastChannel](broadcastchannel), ev: [BroadcastChannelEventMap](broadcastchanneleventmap)[K]) => any, > > options?: boolean | [AddEventListenerOptions](addeventlisteneroptions),): void` > `addEventListener( > type: string, > > listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), > > options?: boolean | [AddEventListenerOptions](addeventlisteneroptions),): void` > `close(): void` Closes the BroadcastChannel object, opening it up to garbage collection. > `postMessage(message: any): void` Sends the given message to other BroadcastChannel objects set up for this channel. Messages can be structured objects, e.g. nested objects and arrays. 
> `removeEventListener<K extends keyof [BroadcastChannelEventMap](broadcastchanneleventmap)>( > type: K, > > listener: (this: [BroadcastChannel](broadcastchannel), ev: [BroadcastChannelEventMap](broadcastchanneleventmap)[K]) => any, > > options?: boolean | [EventListenerOptions](eventlisteneroptions),): void` > `removeEventListener( > type: string, > > listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), > > options?: boolean | [EventListenerOptions](eventlisteneroptions),): void` deno ErrorEvent ErrorEvent ========== ``` class ErrorEvent extends Event { constructor(type: string, eventInitDict?: [ErrorEventInit](erroreventinit)); readonly colno: number; readonly error: any; readonly filename: string; readonly lineno: number; readonly message: string; } ``` Extends ------- > `Event` Constructors ------------ > `new ErrorEvent(type: string, eventInitDict?: [ErrorEventInit](erroreventinit))` Properties ---------- > `colno: number` > `error: any` > `filename: string` > `lineno: number` > `message: string` deno FormDataEntryValue FormDataEntryValue ================== ``` type FormDataEntryValue = [File](file) | string; ``` Type ---- > `[File](file) | string` deno fetch fetch ===== Fetch a resource from the network. It returns a `Promise` that resolves to the `Response` to that `Request`, whether it is successful or not. ``` const response = await fetch("http://my.json.host/data.json"); console.log(response.status); // e.g. 200 console.log(response.statusText); // e.g. 
"OK" const jsonData = await response.json(); ``` ``` function fetch(input: [URL](url) | [Request](request) | string, init?: [RequestInit](requestinit)): Promise<[Response](response)>; ``` > `fetch(input: [URL](url) | [Request](request) | string, init?: [RequestInit](requestinit)): Promise<[Response](response)>` ### Parameters > `input: [URL](url) | [Request](request) | string` > `init?: [RequestInit](requestinit) optional` ### Return Type > `Promise<[Response](response)>` deno Deno.ReadPermissionDescriptor Deno.ReadPermissionDescriptor ============================= ``` interface ReadPermissionDescriptor {name: "read"; path?: string | [URL](url); } ``` Properties ---------- > `name: "read"` > `path?: string | [URL](url)` deno AesKeyAlgorithm AesKeyAlgorithm =============== ``` interface AesKeyAlgorithm extends [KeyAlgorithm](keyalgorithm) {length: number; } ``` Extends ------- > `[KeyAlgorithm](keyalgorithm)` Properties ---------- > `length: number` deno RsaHashedKeyAlgorithm RsaHashedKeyAlgorithm ===================== ``` interface RsaHashedKeyAlgorithm extends [RsaKeyAlgorithm](rsakeyalgorithm) {hash: [KeyAlgorithm](keyalgorithm); } ``` Extends ------- > `[RsaKeyAlgorithm](rsakeyalgorithm)` Properties ---------- > `hash: [KeyAlgorithm](keyalgorithm)` deno MessageEvent MessageEvent ============ ``` class MessageEvent<T = any> extends Event { constructor(type: string, eventInitDict?: [MessageEventInit](messageeventinit)); readonly data: T; readonly lastEventId: string; readonly ports: ReadonlyArray<[MessagePort](messageport)>; } ``` Type Parameters --------------- > `T = any` Extends ------- > `Event` Constructors ------------ > `new MessageEvent(type: string, eventInitDict?: [MessageEventInit](messageeventinit))` Properties ---------- > `data: T` Returns the data of the message. > `lastEventId: string` Returns the last event ID string, for server-sent events. > `ports: ReadonlyArray<[MessagePort](messageport)>` Returns transferred ports. 
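A `MessageEvent` can also be constructed directly, which is handy for exercising message handlers without a worker or channel. A minimal sketch (the payload and event ID below are hypothetical):

```typescript
// Construct a MessageEvent directly, as a worker or BroadcastChannel
// would when delivering a message. The data and lastEventId are made up.
const event = new MessageEvent("message", {
  data: { user: "ada" },
  lastEventId: "42",
});

console.log(event.type);        // "message"
console.log(event.data.user);   // "ada"
console.log(event.lastEventId); // "42"
```

Dispatching such an event on an `EventTarget` invokes any registered `"message"` listeners synchronously.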
deno ReadableStreamReader ReadableStreamReader ==================== ``` interface ReadableStreamReader <R = any> {cancel(): Promise<void>; read(): Promise<[ReadableStreamReadResult](readablestreamreadresult)<R>>; releaseLock(): void; } ``` ``` var ReadableStreamReader: {prototype: [ReadableStreamReader](readablestreamreader); new (): [ReadableStreamReader](readablestreamreader); }; ``` Type Parameters --------------- > `R = any` Methods ------- > `cancel(): Promise<void>` > `read(): Promise<[ReadableStreamReadResult](readablestreamreadresult)<R>>` > `releaseLock(): void` deno GPUCompilationMessageType GPUCompilationMessageType ========================= ``` type GPUCompilationMessageType = "error" | "warning" | "info"; ``` Type ---- > `"error" | "warning" | "info"` deno Deno.symlink Deno.symlink ============ Creates `newpath` as a symbolic link to `oldpath`. The `options.type` parameter can be set to `"file"` or `"dir"`. This option is only used on Windows and ignored on other platforms. ``` await Deno.symlink("old/name", "new/name"); ``` Requires full `allow-read` and `allow-write` permissions. 
``` function symlink( oldpath: string | [URL](url), newpath: string | [URL](url), options?: [SymlinkOptions](deno.symlinkoptions),): Promise<void>; ``` > `symlink(oldpath: string | [URL](url), newpath: string | [URL](url), options?: [SymlinkOptions](deno.symlinkoptions)): Promise<void>` ### Parameters > `oldpath: string | [URL](url)` > `newpath: string | [URL](url)` > `options?: [SymlinkOptions](deno.symlinkoptions) optional` ### Return Type > `Promise<void>` deno RequestInfo RequestInfo =========== ``` type RequestInfo = [Request](request) | string; ``` Type ---- > `[Request](request) | string` deno WebAssembly.Imports WebAssembly.Imports =================== ``` type Imports = Record<string, [ModuleImports](webassembly.moduleimports)>; ``` Type ---- > `Record<string, [ModuleImports](webassembly.moduleimports)>` deno Deno.PermissionStatus Deno.PermissionStatus ===================== ``` class PermissionStatus extends EventTarget { onchange: ((this: [PermissionStatus](deno.permissionstatus), ev: [Event](event)) => any) | null; readonly state: [PermissionState](deno.permissionstate); addEventListener<K extends keyof [PermissionStatusEventMap](deno.permissionstatuseventmap)>( type: K, listener: (this: [PermissionStatus](deno.permissionstatus), ev: [PermissionStatusEventMap](deno.permissionstatuseventmap)[K]) => any, options?: boolean | [AddEventListenerOptions](addeventlisteneroptions),): void; addEventListener( type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [AddEventListenerOptions](addeventlisteneroptions),): void; removeEventListener<K extends keyof [PermissionStatusEventMap](deno.permissionstatuseventmap)>( type: K, listener: (this: [PermissionStatus](deno.permissionstatus), ev: [PermissionStatusEventMap](deno.permissionstatuseventmap)[K]) => any, options?: boolean | [EventListenerOptions](eventlisteneroptions),): void; removeEventListener( type: string, listener: 
[EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [EventListenerOptions](eventlisteneroptions),): void; } ``` Extends ------- > `EventTarget` Properties ---------- > `onchange: ((this: [PermissionStatus](deno.permissionstatus), ev: [Event](event)) => any) | null` > `state: [PermissionState](deno.permissionstate)` Methods ------- > `addEventListener<K extends keyof [PermissionStatusEventMap](deno.permissionstatuseventmap)>(type: K, listener: (this: [PermissionStatus](deno.permissionstatus), ev: [PermissionStatusEventMap](deno.permissionstatuseventmap)[K]) => any, options?: boolean | [AddEventListenerOptions](addeventlisteneroptions)): void` > `addEventListener(type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [AddEventListenerOptions](addeventlisteneroptions)): void` > `removeEventListener<K extends keyof [PermissionStatusEventMap](deno.permissionstatuseventmap)>(type: K, listener: (this: [PermissionStatus](deno.permissionstatus), ev: [PermissionStatusEventMap](deno.permissionstatuseventmap)[K]) => any, options?: boolean | [EventListenerOptions](eventlisteneroptions)): void` > `removeEventListener(type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [EventListenerOptions](eventlisteneroptions)): void` deno URLPatternInit URLPatternInit ============== ``` interface URLPatternInit {baseURL?: string; hash?: string; hostname?: string; password?: string; pathname?: string; port?: string; protocol?: string; search?: string; username?: string; } ``` Properties ---------- > `baseURL?: string` > `hash?: string` > `hostname?: string` > `password?: string` > `pathname?: string` > `port?: string` > `protocol?: string` > `search?: string` > `username?: string` deno Worker Worker ====== ``` class Worker extends EventTarget { constructor(specifier: string | [URL](url), options?: 
[WorkerOptions](workeroptions)); onerror?: (e: [ErrorEvent](errorevent)) => void; onmessage?: (e: [MessageEvent](messageevent)) => void; onmessageerror?: (e: [MessageEvent](messageevent)) => void; addEventListener<K extends keyof [WorkerEventMap](workereventmap)>( type: K, listener: (this: [Worker](worker), ev: [WorkerEventMap](workereventmap)[K]) => any, options?: boolean | [AddEventListenerOptions](addeventlisteneroptions),): void; addEventListener( type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [AddEventListenerOptions](addeventlisteneroptions),): void; postMessage(message: any, transfer: [Transferable](transferable)[]): void; postMessage(message: any, options?: [StructuredSerializeOptions](structuredserializeoptions)): void; removeEventListener<K extends keyof [WorkerEventMap](workereventmap)>( type: K, listener: (this: [Worker](worker), ev: [WorkerEventMap](workereventmap)[K]) => any, options?: boolean | [EventListenerOptions](eventlisteneroptions),): void; removeEventListener( type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [EventListenerOptions](eventlisteneroptions),): void; terminate(): void; } ``` Extends ------- > `EventTarget` Constructors ------------ > `new Worker(specifier: string | [URL](url), options?: [WorkerOptions](workeroptions))` Properties ---------- > `onerror: (e: [ErrorEvent](errorevent)) => void` > `onmessage: (e: [MessageEvent](messageevent)) => void` > `onmessageerror: (e: [MessageEvent](messageevent)) => void` Methods ------- > `addEventListener<K extends keyof [WorkerEventMap](workereventmap)>(type: K, listener: (this: [Worker](worker), ev: [WorkerEventMap](workereventmap)[K]) => any, options?: boolean | [AddEventListenerOptions](addeventlisteneroptions)): void` > `addEventListener(type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean 
| [AddEventListenerOptions](addeventlisteneroptions)): void` > `postMessage(message: any, transfer: [Transferable](transferable)[]): void` > `postMessage(message: any, options?: [StructuredSerializeOptions](structuredserializeoptions)): void` > `removeEventListener<K extends keyof [WorkerEventMap](workereventmap)>(type: K, listener: (this: [Worker](worker), ev: [WorkerEventMap](workereventmap)[K]) => any, options?: boolean | [EventListenerOptions](eventlisteneroptions)): void` > `removeEventListener(type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [EventListenerOptions](eventlisteneroptions)): void` > `terminate(): void` deno Deno.WriteFileOptions Deno.WriteFileOptions ===================== Options for writing to a file. ``` interface WriteFileOptions {append?: boolean; create?: boolean; mode?: number; signal?: [AbortSignal](abortsignal); } ``` Properties ---------- > `append?: boolean` Defaults to `false`. If set to `true`, appends to the file instead of overwriting previous contents. > `create?: boolean` Sets the option to allow creating a new file, if one doesn't already exist at the specified path (defaults to `true`). > `mode?: number` The permissions (mode) to apply to the file. > `signal?: [AbortSignal](abortsignal)` An abort signal to allow cancellation of the file write operation. If the signal becomes aborted, the write file operation will be stopped and the promise returned will be rejected with an `AbortError`. deno GPUFrontFace GPUFrontFace ============ ``` type GPUFrontFace = "ccw" | "cw"; ``` Type ---- > `"ccw" | "cw"` deno URLPatternResult URLPatternResult ================ `URLPatternResult` is the object returned from `URLPattern.exec`. 
``` interface URLPatternResult {hash: [URLPatternComponentResult](urlpatterncomponentresult); hostname: [URLPatternComponentResult](urlpatterncomponentresult); inputs: [[URLPatternInit](urlpatterninit)] | [[URLPatternInit](urlpatterninit), string]; password: [URLPatternComponentResult](urlpatterncomponentresult); pathname: [URLPatternComponentResult](urlpatterncomponentresult); port: [URLPatternComponentResult](urlpatterncomponentresult); protocol: [URLPatternComponentResult](urlpatterncomponentresult); search: [URLPatternComponentResult](urlpatterncomponentresult); username: [URLPatternComponentResult](urlpatterncomponentresult); } ``` Properties ---------- > `hash: [URLPatternComponentResult](urlpatterncomponentresult)` The matched result for the `hash` matcher. > `hostname: [URLPatternComponentResult](urlpatterncomponentresult)` The matched result for the `hostname` matcher. > `inputs: [[URLPatternInit](urlpatterninit)] | [[URLPatternInit](urlpatterninit), string]` The inputs provided when matching. > `password: [URLPatternComponentResult](urlpatterncomponentresult)` The matched result for the `password` matcher. > `pathname: [URLPatternComponentResult](urlpatterncomponentresult)` The matched result for the `pathname` matcher. > `port: [URLPatternComponentResult](urlpatterncomponentresult)` The matched result for the `port` matcher. > `protocol: [URLPatternComponentResult](urlpatterncomponentresult)` The matched result for the `protocol` matcher. > `search: [URLPatternComponentResult](urlpatterncomponentresult)` The matched result for the `search` matcher. > `username: [URLPatternComponentResult](urlpatterncomponentresult)` The matched result for the `username` matcher. deno PerformanceMarkOptions PerformanceMarkOptions ====================== Options which are used in conjunction with `performance.mark`. Check out the MDN [`performance.mark()`](https://developer.mozilla.org/en-US/docs/Web/API/Performance/mark#markoptions) documentation for more details. 
``` interface PerformanceMarkOptions {detail?: any; startTime?: number; } ``` Properties ---------- > `detail?: any` Metadata to be included in the mark. > `startTime?: number` Timestamp to be used as the mark time. deno WebAssembly.WebAssemblyInstantiatedSource WebAssembly.WebAssemblyInstantiatedSource ========================================= The value returned from `WebAssembly.instantiate`. ``` interface WebAssemblyInstantiatedSource {instance: [Instance](webassembly.instance); module: [Module](webassembly.module); } ``` Properties ---------- > `instance: [Instance](webassembly.instance)` > `module: [Module](webassembly.module)` A `WebAssembly.Module` object representing the compiled WebAssembly module. This `Module` can be instantiated again, or shared via postMessage(). deno Deno.SymlinkOptions Deno.SymlinkOptions =================== ``` type SymlinkOptions = {type: "file" | "dir"; }; ``` Type ---- > `{type: "file" | "dir"; }` deno CustomEvent CustomEvent =========== ``` class CustomEvent<T = any> extends Event { constructor(typeArg: string, eventInitDict?: [CustomEventInit](customeventinit)<T>); readonly detail: T; } ``` Type Parameters --------------- > `T = any` Extends ------- > `Event` Constructors ------------ > `new CustomEvent(typeArg: string, eventInitDict?: [CustomEventInit](customeventinit)<T>)` Properties ---------- > `detail: T` Returns any custom data event was created with. Typically used for synthetic events.
deno GPUComputePipeline GPUComputePipeline ================== ``` class GPUComputePipeline implements [GPUObjectBase](gpuobjectbase), [GPUPipelineBase](gpupipelinebase) { label: string; getBindGroupLayout(index: number): [GPUBindGroupLayout](gpubindgrouplayout); } ``` Implements ---------- > `[GPUObjectBase](gpuobjectbase)` > `[GPUPipelineBase](gpupipelinebase)` Properties ---------- > `label: string` Methods ------- > `getBindGroupLayout(index: number): [GPUBindGroupLayout](gpubindgrouplayout)` deno CustomEventInit CustomEventInit =============== ``` interface CustomEventInit <T = any> extends [EventInit](eventinit) {detail?: T; } ``` Type Parameters --------------- > `T = any` Extends ------- > `[EventInit](eventinit)` Properties ---------- > `detail?: T` deno RequestDestination RequestDestination ================== ``` type RequestDestination = | "" | "audio" | "audioworklet" | "document" | "embed" | "font" | "image" | "manifest" | "object" | "paintworklet" | "report" | "script" | "sharedworker" | "style" | "track" | "video" | "worker" | "xslt"; ``` Type ---- > `"" | "audio" | "audioworklet" | "document" | "embed" | "font" | "image" | "manifest" | "object" | "paintworklet" | "report" | "script" | "sharedworker" | "style" | "track" | "video" | "worker" | "xslt"` deno SubtleCrypto SubtleCrypto ============ This Web Crypto API interface provides a number of low-level cryptographic functions. It is accessed via the `subtle` property of the global `crypto` object (`crypto.subtle`). 
``` interface SubtleCrypto {decrypt( algorithm: | [AlgorithmIdentifier](algorithmidentifier) | [RsaOaepParams](rsaoaepparams) | [AesCbcParams](aescbcparams) | [AesGcmParams](aesgcmparams) | [AesCtrParams](aesctrparams) , key: [CryptoKey](cryptokey), data: [BufferSource](buffersource),): Promise<ArrayBuffer>; deriveBits( algorithm: | [AlgorithmIdentifier](algorithmidentifier) | [HkdfParams](hkdfparams) | [Pbkdf2Params](pbkdf2params) | [EcdhKeyDeriveParams](ecdhkeyderiveparams) , baseKey: [CryptoKey](cryptokey), length: number,): Promise<ArrayBuffer>; deriveKey( algorithm: | [AlgorithmIdentifier](algorithmidentifier) | [HkdfParams](hkdfparams) | [Pbkdf2Params](pbkdf2params) | [EcdhKeyDeriveParams](ecdhkeyderiveparams) , baseKey: [CryptoKey](cryptokey), derivedKeyType: | [AlgorithmIdentifier](algorithmidentifier) | [AesDerivedKeyParams](aesderivedkeyparams) | [HmacImportParams](hmacimportparams) | [HkdfParams](hkdfparams) | [Pbkdf2Params](pbkdf2params) , extractable: boolean, keyUsages: [KeyUsage](keyusage)[],): Promise<[CryptoKey](cryptokey)>; digest(algorithm: [AlgorithmIdentifier](algorithmidentifier), data: [BufferSource](buffersource)): Promise<ArrayBuffer>; encrypt( algorithm: | [AlgorithmIdentifier](algorithmidentifier) | [RsaOaepParams](rsaoaepparams) | [AesCbcParams](aescbcparams) | [AesGcmParams](aesgcmparams) | [AesCtrParams](aesctrparams) , key: [CryptoKey](cryptokey), data: [BufferSource](buffersource),): Promise<ArrayBuffer>; exportKey(format: "jwk", key: [CryptoKey](cryptokey)): Promise<[JsonWebKey](jsonwebkey)>; exportKey(format: Exclude<[KeyFormat](keyformat), "jwk">, key: [CryptoKey](cryptokey)): Promise<ArrayBuffer>; generateKey( algorithm: [RsaHashedKeyGenParams](rsahashedkeygenparams) | [EcKeyGenParams](eckeygenparams), extractable: boolean, keyUsages: [KeyUsage](keyusage)[],): Promise<[CryptoKeyPair](cryptokeypair)>; generateKey( algorithm: [AesKeyGenParams](aeskeygenparams) | [HmacKeyGenParams](hmackeygenparams), extractable: boolean, keyUsages: 
[KeyUsage](keyusage)[],): Promise<[CryptoKey](cryptokey)>; generateKey( algorithm: [AlgorithmIdentifier](algorithmidentifier), extractable: boolean, keyUsages: [KeyUsage](keyusage)[],): Promise<[CryptoKeyPair](cryptokeypair) | [CryptoKey](cryptokey)>; importKey( format: "jwk", keyData: [JsonWebKey](jsonwebkey), algorithm: | [AlgorithmIdentifier](algorithmidentifier) | [HmacImportParams](hmacimportparams) | [RsaHashedImportParams](rsahashedimportparams) | [EcKeyImportParams](eckeyimportparams) , extractable: boolean, keyUsages: [KeyUsage](keyusage)[],): Promise<[CryptoKey](cryptokey)>; importKey( format: Exclude<[KeyFormat](keyformat), "jwk">, keyData: [BufferSource](buffersource), algorithm: | [AlgorithmIdentifier](algorithmidentifier) | [HmacImportParams](hmacimportparams) | [RsaHashedImportParams](rsahashedimportparams) | [EcKeyImportParams](eckeyimportparams) , extractable: boolean, keyUsages: [KeyUsage](keyusage)[],): Promise<[CryptoKey](cryptokey)>; sign( algorithm: [AlgorithmIdentifier](algorithmidentifier) | [RsaPssParams](rsapssparams) | [EcdsaParams](ecdsaparams), key: [CryptoKey](cryptokey), data: [BufferSource](buffersource),): Promise<ArrayBuffer>; unwrapKey( format: [KeyFormat](keyformat), wrappedKey: [BufferSource](buffersource), unwrappingKey: [CryptoKey](cryptokey), unwrapAlgorithm: | [AlgorithmIdentifier](algorithmidentifier) | [RsaOaepParams](rsaoaepparams) | [AesCbcParams](aescbcparams) | [AesCtrParams](aesctrparams) , unwrappedKeyAlgorithm: | [AlgorithmIdentifier](algorithmidentifier) | [HmacImportParams](hmacimportparams) | [RsaHashedImportParams](rsahashedimportparams) | [EcKeyImportParams](eckeyimportparams) , extractable: boolean, keyUsages: [KeyUsage](keyusage)[],): Promise<[CryptoKey](cryptokey)>; verify( algorithm: [AlgorithmIdentifier](algorithmidentifier) | [RsaPssParams](rsapssparams) | [EcdsaParams](ecdsaparams), key: [CryptoKey](cryptokey), signature: [BufferSource](buffersource), data: [BufferSource](buffersource),): 
Promise<boolean>; wrapKey( format: [KeyFormat](keyformat), key: [CryptoKey](cryptokey), wrappingKey: [CryptoKey](cryptokey), wrapAlgorithm: | [AlgorithmIdentifier](algorithmidentifier) | [RsaOaepParams](rsaoaepparams) | [AesCbcParams](aescbcparams) | [AesCtrParams](aesctrparams) ,): Promise<ArrayBuffer>; } ``` ``` var SubtleCrypto: {prototype: [SubtleCrypto](subtlecrypto); new (): [SubtleCrypto](subtlecrypto); }; ``` Methods ------- > `decrypt( > algorithm: > > | [AlgorithmIdentifier](algorithmidentifier) > > > | [RsaOaepParams](rsaoaepparams) > > > | [AesCbcParams](aescbcparams) > > > | [AesGcmParams](aesgcmparams) > > > | [AesCtrParams](aesctrparams) > > , > > key: [CryptoKey](cryptokey), > > data: [BufferSource](buffersource),): Promise<ArrayBuffer>` > `deriveBits( > algorithm: > > | [AlgorithmIdentifier](algorithmidentifier) > > > | [HkdfParams](hkdfparams) > > > | [Pbkdf2Params](pbkdf2params) > > > | [EcdhKeyDeriveParams](ecdhkeyderiveparams) > > , > > baseKey: [CryptoKey](cryptokey), > > length: number,): Promise<ArrayBuffer>` > `deriveKey( > algorithm: > > | [AlgorithmIdentifier](algorithmidentifier) > > > | [HkdfParams](hkdfparams) > > > | [Pbkdf2Params](pbkdf2params) > > > | [EcdhKeyDeriveParams](ecdhkeyderiveparams) > > , > > baseKey: [CryptoKey](cryptokey), > > derivedKeyType: > > | [AlgorithmIdentifier](algorithmidentifier) > > > | [AesDerivedKeyParams](aesderivedkeyparams) > > > | [HmacImportParams](hmacimportparams) > > > | [HkdfParams](hkdfparams) > > > | [Pbkdf2Params](pbkdf2params) > > , > > extractable: boolean, > > keyUsages: [KeyUsage](keyusage)[],): Promise<[CryptoKey](cryptokey)>` > `digest(algorithm: [AlgorithmIdentifier](algorithmidentifier), data: [BufferSource](buffersource)): Promise<ArrayBuffer>` > `encrypt( > algorithm: > > | [AlgorithmIdentifier](algorithmidentifier) > > > | [RsaOaepParams](rsaoaepparams) > > > | [AesCbcParams](aescbcparams) > > > | [AesGcmParams](aesgcmparams) > > > | [AesCtrParams](aesctrparams) > > , > > key: 
[CryptoKey](cryptokey), > > data: [BufferSource](buffersource),): Promise<ArrayBuffer>` > `exportKey(format: "jwk", key: [CryptoKey](cryptokey)): Promise<[JsonWebKey](jsonwebkey)>` > `exportKey(format: Exclude<[KeyFormat](keyformat), "jwk">, key: [CryptoKey](cryptokey)): Promise<ArrayBuffer>` > `generateKey( > algorithm: [RsaHashedKeyGenParams](rsahashedkeygenparams) | [EcKeyGenParams](eckeygenparams), > > extractable: boolean, > > keyUsages: [KeyUsage](keyusage)[],): Promise<[CryptoKeyPair](cryptokeypair)>` > `generateKey( > algorithm: [AesKeyGenParams](aeskeygenparams) | [HmacKeyGenParams](hmackeygenparams), > > extractable: boolean, > > keyUsages: [KeyUsage](keyusage)[],): Promise<[CryptoKey](cryptokey)>` > `generateKey( > algorithm: [AlgorithmIdentifier](algorithmidentifier), > > extractable: boolean, > > keyUsages: [KeyUsage](keyusage)[],): Promise<[CryptoKeyPair](cryptokeypair) | [CryptoKey](cryptokey)>` > `importKey( > format: "jwk", > > keyData: [JsonWebKey](jsonwebkey), > > algorithm: > > | [AlgorithmIdentifier](algorithmidentifier) > > > | [HmacImportParams](hmacimportparams) > > > | [RsaHashedImportParams](rsahashedimportparams) > > > | [EcKeyImportParams](eckeyimportparams) > > , > > extractable: boolean, > > keyUsages: [KeyUsage](keyusage)[],): Promise<[CryptoKey](cryptokey)>` > `importKey( > format: Exclude<[KeyFormat](keyformat), "jwk">, > > keyData: [BufferSource](buffersource), > > algorithm: > > | [AlgorithmIdentifier](algorithmidentifier) > > > | [HmacImportParams](hmacimportparams) > > > | [RsaHashedImportParams](rsahashedimportparams) > > > | [EcKeyImportParams](eckeyimportparams) > > , > > extractable: boolean, > > keyUsages: [KeyUsage](keyusage)[],): Promise<[CryptoKey](cryptokey)>` > `sign( > algorithm: [AlgorithmIdentifier](algorithmidentifier) | [RsaPssParams](rsapssparams) | [EcdsaParams](ecdsaparams), > > key: [CryptoKey](cryptokey), > > data: [BufferSource](buffersource),): Promise<ArrayBuffer>` > `unwrapKey( > format: 
[KeyFormat](keyformat), > > wrappedKey: [BufferSource](buffersource), > > unwrappingKey: [CryptoKey](cryptokey), > > unwrapAlgorithm: > > | [AlgorithmIdentifier](algorithmidentifier) > > > | [RsaOaepParams](rsaoaepparams) > > > | [AesCbcParams](aescbcparams) > > > | [AesCtrParams](aesctrparams) > > , > > unwrappedKeyAlgorithm: > > | [AlgorithmIdentifier](algorithmidentifier) > > > | [HmacImportParams](hmacimportparams) > > > | [RsaHashedImportParams](rsahashedimportparams) > > > | [EcKeyImportParams](eckeyimportparams) > > , > > extractable: boolean, > > keyUsages: [KeyUsage](keyusage)[],): Promise<[CryptoKey](cryptokey)>` > `verify( > algorithm: [AlgorithmIdentifier](algorithmidentifier) | [RsaPssParams](rsapssparams) | [EcdsaParams](ecdsaparams), > > key: [CryptoKey](cryptokey), > > signature: [BufferSource](buffersource), > > data: [BufferSource](buffersource),): Promise<boolean>` > `wrapKey( > format: [KeyFormat](keyformat), > > key: [CryptoKey](cryptokey), > > wrappingKey: [CryptoKey](cryptokey), > > wrapAlgorithm: > > | [AlgorithmIdentifier](algorithmidentifier) > > > | [RsaOaepParams](rsaoaepparams) > > > | [AesCbcParams](aescbcparams) > > > | [AesCtrParams](aesctrparams) > > ,): Promise<ArrayBuffer>` deno Deno.link Deno.link ========= Creates `newpath` as a hard link to `oldpath`. ``` await Deno.link("old/name", "new/name"); ``` Requires `allow-read` and `allow-write` permissions. 
``` function link(oldpath: string, newpath: string): Promise<void>; ``` > `link(oldpath: string, newpath: string): Promise<void>` ### Parameters > `oldpath: string` > `newpath: string` ### Return Type > `Promise<void>` deno WritableStreamDefaultControllerStartCallback WritableStreamDefaultControllerStartCallback ============================================ ``` interface WritableStreamDefaultControllerStartCallback {(controller: [WritableStreamDefaultController](writablestreamdefaultcontroller)): void | PromiseLike<void>;} ``` Call Signatures --------------- > `(controller: [WritableStreamDefaultController](writablestreamdefaultcontroller)): void | PromiseLike<void>` deno Deno.copyFile Deno.copyFile ============= Copies the contents and permissions of one file to another specified path, by default creating a new file if needed, else overwriting. Fails if target path is a directory or is unwritable. ``` await Deno.copyFile("from.txt", "to.txt"); ``` Requires `allow-read` permission on `fromPath`. Requires `allow-write` permission on `toPath`. ``` function copyFile(fromPath: string | [URL](url), toPath: string | [URL](url)): Promise<void>; ``` > `copyFile(fromPath: string | [URL](url), toPath: string | [URL](url)): Promise<void>` ### Parameters > `fromPath: string | [URL](url)` > `toPath: string | [URL](url)` ### Return Type > `Promise<void>` deno CacheStorage CacheStorage ============ ``` interface CacheStorage {delete(cacheName: string): Promise<boolean>; has(cacheName: string): Promise<boolean>; open(cacheName: string): Promise<[Cache](cache)>; } ``` ``` var CacheStorage: {prototype: [CacheStorage](cachestorage); new (): [CacheStorage](cachestorage); }; ``` Methods ------- > `delete(cacheName: string): Promise<boolean>` Delete cache storage for the provided name. > `has(cacheName: string): Promise<boolean>` Check if cache already exists for the provided name. > `open(cacheName: string): Promise<[Cache](cache)>` Open a cache storage for the provided name. 
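To illustrate where a `WritableStreamDefaultControllerStartCallback` fits in a sink's lifecycle: `start` runs exactly once, when the `WritableStream` is constructed and before any `write` call, and may return a promise to delay the stream becoming ready. A minimal sketch:

```typescript
// Record the order in which the underlying sink's callbacks fire.
const log: string[] = [];
const stream = new WritableStream<string>({
  start() {
    // Runs exactly once, at construction time, before any write.
    log.push("start");
  },
  write(chunk) {
    log.push(`write:${chunk}`);
  },
  close() {
    log.push("close");
  },
});

const writer = stream.getWriter();
await writer.write("a");
await writer.write("b");
await writer.close();
console.log(log); // ["start", "write:a", "write:b", "close"]
```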
deno Event Event ===== An event which takes place in the DOM. ``` class Event { constructor(type: string, eventInitDict?: [EventInit](eventinit)); readonly AT\_TARGET: number; readonly bubbles: boolean; readonly BUBBLING\_PHASE: number; readonly cancelable: boolean; cancelBubble: boolean; readonly CAPTURING\_PHASE: number; readonly composed: boolean; readonly currentTarget: [EventTarget](eventtarget) | null; readonly defaultPrevented: boolean; readonly eventPhase: number; readonly isTrusted: boolean; readonly NONE: number; readonly target: [EventTarget](eventtarget) | null; readonly timeStamp: number; readonly type: string; composedPath(): [EventTarget](eventtarget)[]; preventDefault(): void; stopImmediatePropagation(): void; stopPropagation(): void; static readonly AT\_TARGET: number; static readonly BUBBLING\_PHASE: number; static readonly CAPTURING\_PHASE: number; static readonly NONE: number; } ``` Constructors ------------ > `new Event(type: string, eventInitDict?: [EventInit](eventinit))` Properties ---------- > `AT_TARGET: number` > `bubbles: boolean` Returns true or false depending on how event was initialized. True if event goes through its target's ancestors in reverse tree order, and false otherwise. > `BUBBLING_PHASE: number` > `cancelable: boolean` Returns true or false depending on how event was initialized. Its return value does not always carry meaning, but true can indicate that part of the operation during which event was dispatched, can be canceled by invoking the preventDefault() method. > `cancelBubble: boolean` > `CAPTURING_PHASE: number` > `composed: boolean` Returns true or false depending on how event was initialized. True if event invokes listeners past a ShadowRoot node that is the root of its target, and false otherwise. > `currentTarget: [EventTarget](eventtarget) | null` Returns the object whose event listener's callback is currently being invoked. 
> `defaultPrevented: boolean` Returns true if preventDefault() was invoked successfully to indicate cancellation, and false otherwise. > `eventPhase: number` Returns the event's phase, which is one of NONE, CAPTURING\_PHASE, AT\_TARGET, and BUBBLING\_PHASE. > `isTrusted: boolean` Returns true if event was dispatched by the user agent, and false otherwise. > `NONE: number` > `target: [EventTarget](eventtarget) | null` Returns the object to which event is dispatched (its target). > `timeStamp: number` Returns the event's timestamp as the number of milliseconds measured relative to the time origin. > `type: string` Returns the type of event, e.g. "click", "hashchange", or "submit". Methods ------- > `composedPath(): [EventTarget](eventtarget)[]` Returns the invocation target objects of event's path (objects on which listeners will be invoked), except for any nodes in shadow trees of which the shadow root's mode is "closed" that are not reachable from event's currentTarget. > `preventDefault(): void` If invoked when the cancelable attribute value is true, and while executing a listener for the event with passive set to false, signals to the operation that caused event to be dispatched that it needs to be canceled. > `stopImmediatePropagation(): void` Invoking this method prevents event from reaching any registered event listeners after the current one finishes running and, when dispatched in a tree, also prevents event from reaching any other objects. > `stopPropagation(): void` When dispatched in a tree, invoking this method prevents event from reaching any objects other than the current object. 
Static Properties ----------------- > `AT_TARGET: number` > `BUBBLING_PHASE: number` > `CAPTURING_PHASE: number` > `NONE: number` deno RequestMode RequestMode =========== ``` type RequestMode = | "cors" | "navigate" | "no-cors" | "same-origin"; ``` Type ---- > `"cors" | "navigate" | "no-cors" | "same-origin"` deno CountQueuingStrategy CountQueuingStrategy ==================== This Streams API interface provides a built-in chunk counting queuing strategy that can be used when constructing streams. ``` interface CountQueuingStrategy extends [QueuingStrategy](queuingstrategy) {highWaterMark: number; size(chunk: any): 1; } ``` ``` var CountQueuingStrategy: {prototype: [CountQueuingStrategy](countqueuingstrategy); new (options: {highWaterMark: number; }): [CountQueuingStrategy](countqueuingstrategy); }; ``` Extends ------- > `[QueuingStrategy](queuingstrategy)` Properties ---------- > `highWaterMark: number` Methods ------- > `size(chunk: any): 1` deno GPUVertexState GPUVertexState ============== ``` interface GPUVertexState extends [GPUProgrammableStage](gpuprogrammablestage) {buffers?: ([GPUVertexBufferLayout](gpuvertexbufferlayout) | null)[]; } ``` Extends ------- > `[GPUProgrammableStage](gpuprogrammablestage)` Properties ---------- > `buffers?: ([GPUVertexBufferLayout](gpuvertexbufferlayout) | null)[]` deno GPUPrimitiveState GPUPrimitiveState ================= ``` interface GPUPrimitiveState {cullMode?: [GPUCullMode](gpucullmode); frontFace?: [GPUFrontFace](gpufrontface); stripIndexFormat?: [GPUIndexFormat](gpuindexformat); topology?: [GPUPrimitiveTopology](gpuprimitivetopology); unclippedDepth?: boolean; } ``` Properties ---------- > `cullMode?: [GPUCullMode](gpucullmode)` > `frontFace?: [GPUFrontFace](gpufrontface)` > `stripIndexFormat?: [GPUIndexFormat](gpuindexformat)` > `topology?: [GPUPrimitiveTopology](gpuprimitivetopology)` > `unclippedDepth?: boolean` deno Deno.PermissionOptions Deno.PermissionOptions ====================== Options which define the 
permissions within a test or worker context. `"inherit"` ensures that all permissions of the parent process will be applied to the test context. `"none"` ensures the test context has no permissions. A `PermissionOptionsObject` provides a more specific set of permissions to the test context. ``` type PermissionOptions = "inherit" | "none" | [PermissionOptionsObject](deno.permissionoptionsobject); ``` Type ---- > `"inherit" | "none" | [PermissionOptionsObject](deno.permissionoptionsobject)` deno RsaOaepParams RsaOaepParams ============= ``` interface RsaOaepParams extends [Algorithm](algorithm) {label?: Uint8Array; } ``` Extends ------- > `[Algorithm](algorithm)` Properties ---------- > `label?: Uint8Array`
deno Deno.writeTextFile Deno.writeTextFile ================== Write string `data` to the given `path`, by default creating a new file if needed, else overwriting. ``` await Deno.writeTextFile("hello1.txt", "Hello world\n"); // overwrite "hello1.txt" or create it ``` Requires `allow-write` permission, and `allow-read` if `options.create` is `false`. ``` function writeTextFile( path: string | [URL](url), data: string, options?: [WriteFileOptions](deno.writefileoptions),): Promise<void>; ``` > `writeTextFile(path: string | [URL](url), data: string, options?: [WriteFileOptions](deno.writefileoptions)): Promise<void>` ### Parameters > `path: string | [URL](url)` > `data: string` > `options?: [WriteFileOptions](deno.writefileoptions) optional` ### Return Type > `Promise<void>` deno Deno.fsync Deno.fsync ========== Flushes any pending data and metadata operations of the given file stream to disk. ``` const file = await Deno.open( "my_file.txt", { read: true, write: true, create: true }, ); await Deno.write(file.rid, new TextEncoder().encode("Hello World")); await Deno.ftruncate(file.rid, 1); await Deno.fsync(file.rid); console.log(new TextDecoder().decode(await Deno.readFile("my_file.txt"))); // H ``` ``` function fsync(rid: number): Promise<void>; ``` > `fsync(rid: number): Promise<void>` ### Parameters > `rid: number` ### Return Type > `Promise<void>` deno Deno.addSignalListener Deno.addSignalListener ====================== Registers the given function as a listener of the given signal event. ``` Deno.addSignalListener( "SIGTERM", () => { console.log("SIGTERM!") } ); ``` *Note*: On Windows only `"SIGINT"` (CTRL+C) and `"SIGBREAK"` (CTRL+Break) are supported. 
``` function addSignalListener(signal: [Signal](deno.signal), handler: () => void): void; ``` > `addSignalListener(signal: [Signal](deno.signal), handler: () => void): void` ### Parameters > `signal: [Signal](deno.signal)` > `handler: () => void` ### Return Type > `void` deno WritableStreamDefaultControllerWriteCallback WritableStreamDefaultControllerWriteCallback ============================================ ``` interface WritableStreamDefaultControllerWriteCallback <W> {(chunk: W, controller: [WritableStreamDefaultController](writablestreamdefaultcontroller)): void | PromiseLike<void>;} ``` Type Parameters --------------- > `W` Call Signatures --------------- > `(chunk: W, controller: [WritableStreamDefaultController](writablestreamdefaultcontroller)): void | PromiseLike<void>` deno HmacImportParams HmacImportParams ================ ``` interface HmacImportParams extends [Algorithm](algorithm) {hash: [HashAlgorithmIdentifier](hashalgorithmidentifier); length?: number; } ``` Extends ------- > `[Algorithm](algorithm)` Properties ---------- > `hash: [HashAlgorithmIdentifier](hashalgorithmidentifier)` > `length?: number` deno CacheQueryOptions CacheQueryOptions ================= ``` interface CacheQueryOptions {ignoreMethod?: boolean; ignoreSearch?: boolean; ignoreVary?: boolean; } ``` Properties ---------- > `ignoreMethod?: boolean` > `ignoreSearch?: boolean` > `ignoreVary?: boolean` deno Deno.FsEvent Deno.FsEvent ============ Represents a unique file system event yielded by a [`Deno.FsWatcher`](deno#FsWatcher). ``` interface FsEvent {flag?: [FsEventFlag](deno.fseventflag); kind: | "any" | "access" | "create" | "modify" | "remove" | "other"; paths: string[]; } ``` Properties ---------- > `flag?: [FsEventFlag](deno.fseventflag)` Any additional flags associated with the event. > `kind: "any" | "access" | "create" | "modify" | "remove" | "other"` The kind/type of the file system event. > `paths: string[]` An array of paths that are associated with the file system event. 
deno WebAssembly.TableKind WebAssembly.TableKind ===================== ``` type TableKind = "anyfunc"; ``` Type ---- > `"anyfunc"` deno Deno.fdatasync Deno.fdatasync ============== Flushes any pending data operations of the given file stream to disk. ``` const file = await Deno.open( "my_file.txt", { read: true, write: true, create: true }, ); await Deno.write(file.rid, new TextEncoder().encode("Hello World")); await Deno.fdatasync(file.rid); console.log(new TextDecoder().decode(await Deno.readFile("my_file.txt"))); // Hello World ``` ``` function fdatasync(rid: number): Promise<void>; ``` > `fdatasync(rid: number): Promise<void>` ### Parameters > `rid: number` ### Return Type > `Promise<void>` deno GPUBindGroupLayoutDescriptor GPUBindGroupLayoutDescriptor ============================ ``` interface GPUBindGroupLayoutDescriptor extends [GPUObjectDescriptorBase](gpuobjectdescriptorbase) {entries: [GPUBindGroupLayoutEntry](gpubindgrouplayoutentry)[]; } ``` Extends ------- > `[GPUObjectDescriptorBase](gpuobjectdescriptorbase)` Properties ---------- > `entries: [GPUBindGroupLayoutEntry](gpubindgrouplayoutentry)[]` deno GPUFilterMode GPUFilterMode ============= ``` type GPUFilterMode = "nearest" | "linear"; ``` Type ---- > `"nearest" | "linear"` deno GPUStorageTextureAccess GPUStorageTextureAccess ======================= ``` type GPUStorageTextureAccess = "write-only"; ``` Type ---- > `"write-only"` deno alert alert ===== Shows the given message and waits for the Enter key to be pressed. If stdin is not interactive, it does nothing. 
``` function alert(message?: string): void; ``` > `alert(message?: string): void` ### Parameters > `message?: string optional` ### Return Type > `void` deno ReadableStreamReadDoneResult ReadableStreamReadDoneResult ============================ ``` interface ReadableStreamReadDoneResult <T> {done: true; value?: T; } ``` Type Parameters --------------- > `T` Properties ---------- > `done: true` > `value?: T` deno DecompressionStream DecompressionStream =================== An API for decompressing a stream of data. @example ``` const input = await Deno.open("./file.txt.gz"); const output = await Deno.create("./file.txt"); await input.readable .pipeThrough(new DecompressionStream("gzip")) .pipeTo(output.writable); ``` ``` class DecompressionStream { constructor(format: string); readonly readable: [ReadableStream](readablestream)<Uint8Array>; readonly writable: [WritableStream](writablestream)<Uint8Array>; } ``` Constructors ------------ > `new DecompressionStream(format: string)` Creates a new `DecompressionStream` object which decompresses a stream of data. Throws a `TypeError` if the format passed to the constructor is not supported. Properties ---------- > `readable: [ReadableStream](readablestream)<Uint8Array>` > `writable: [WritableStream](writablestream)<Uint8Array>` deno ReadableStreamBYOBReader ReadableStreamBYOBReader ======================== ``` interface ReadableStreamBYOBReader { readonly closed: Promise<void>; cancel(reason?: any): Promise<void>; read<V extends ArrayBufferView>(view: V): Promise<[ReadableStreamBYOBReadResult](readablestreambyobreadresult)<V>>; releaseLock(): void; } ``` Properties ---------- > `readonly closed: Promise<void>` Methods ------- > `cancel(reason?: any): Promise<void>` > `read<V extends ArrayBufferView>(view: V): Promise<[ReadableStreamBYOBReadResult](readablestreambyobreadresult)<V>>` > `releaseLock(): void` deno Deno.env Deno.env ======== An interface containing methods to interact with the process environment variables. 
``` const env: [Env](deno.env); ``` deno CryptoKey CryptoKey ========= The CryptoKey dictionary of the Web Crypto API represents a cryptographic key. ``` interface CryptoKey { readonly algorithm: [KeyAlgorithm](keyalgorithm); readonly extractable: boolean; readonly type: [KeyType](keytype); readonly usages: [KeyUsage](keyusage)[]; } ``` ``` var CryptoKey: {prototype: [CryptoKey](cryptokey); new (): [CryptoKey](cryptokey); }; ``` Properties ---------- > `readonly algorithm: [KeyAlgorithm](keyalgorithm)` > `readonly extractable: boolean` > `readonly type: [KeyType](keytype)` > `readonly usages: [KeyUsage](keyusage)[]` deno Deno.WritePermissionDescriptor Deno.WritePermissionDescriptor ============================== ``` interface WritePermissionDescriptor {name: "write"; path?: string | [URL](url); } ``` Properties ---------- > `name: "write"` > `path?: string | [URL](url)` deno Deno.makeTempFileSync Deno.makeTempFileSync ===================== Synchronously creates a new temporary file in the default directory for temporary files, unless `dir` is specified. Other options include prefixing and suffixing the directory name with `prefix` and `suffix` respectively. The full path to the newly created file is returned. Multiple programs calling this function simultaneously will create different files. It is the caller's responsibility to remove the file when no longer needed. ``` const tempFileName0 = Deno.makeTempFileSync(); // e.g. /tmp/419e0bf2 const tempFileName1 = Deno.makeTempFileSync({ prefix: 'my_temp' }); // e.g. /tmp/my_temp754d3098 ``` Requires `allow-write` permission. 
``` function makeTempFileSync(options?: [MakeTempOptions](deno.maketempoptions)): string; ``` > `makeTempFileSync(options?: [MakeTempOptions](deno.maketempoptions)): string` ### Parameters > `options?: [MakeTempOptions](deno.maketempoptions) optional` ### Return Type > `string` deno PerformanceEntryList PerformanceEntryList ==================== ``` type PerformanceEntryList = [PerformanceEntry](performanceentry)[]; ``` Type ---- > `[PerformanceEntry](performanceentry)[]` deno structuredClone structuredClone =============== Creates a deep copy of a given value using the structured clone algorithm. Unlike a shallow copy, a deep copy does not hold the same references as the source object, meaning its properties can be changed without affecting the source. For more details, see [MDN](https://developer.mozilla.org/en-US/docs/Glossary/Deep_copy). Throws a `DataCloneError` if any part of the input value is not serializable. @example ``` const object = { x: 0, y: 1 }; const deepCopy = structuredClone(object); deepCopy.x = 1; console.log(deepCopy.x, object.x); // 1 0 const shallowCopy = object; shallowCopy.x = 1; // shallowCopy.x is pointing to the same location in memory as object.x console.log(shallowCopy.x, object.x); // 1 1 ``` ``` function structuredClone(value: any, options?: [StructuredSerializeOptions](structuredserializeoptions)): any; ``` > `structuredClone(value: any, options?: [StructuredSerializeOptions](structuredserializeoptions)): any` ### Parameters > `value: any` > `options?: [StructuredSerializeOptions](structuredserializeoptions) optional` ### Return Type > `any` deno Deno.ResourceMap Deno.ResourceMap ================ A map of open resources that Deno is tracking. The key is the resource ID (*rid*) and the value is its representation. ``` interface ResourceMap {[rid: number]: unknown;} ``` Index Signatures ---------------- [rid: number]: unknown deno Deno.stdin Deno.stdin ========== A reference to `stdin` which can be used to read directly from `stdin`. 
It implements the Deno specific [`Reader`](deno.reader), [`ReaderSync`](deno.readersync), and [`Closer`](deno.closer) interfaces as well as provides a [`ReadableStream`](readablestream) interface. ### Reading chunks from the readable stream ``` const decoder = new TextDecoder(); for await (const chunk of Deno.stdin.readable) { const text = decoder.decode(chunk); // do something with the text } ``` ``` const stdin: & [Reader](deno.reader) & [ReaderSync](deno.readersync) & [Closer](deno.closer) & {readonly rid: number; readonly readable: [ReadableStream](readablestream)<Uint8Array>; setRaw(mode: boolean, options?: [SetRawOptions](deno.setrawoptions)): void; }; ``` deno WebAssembly.Memory WebAssembly.Memory ================== The `WebAssembly.Memory` object is a resizable `ArrayBuffer` or `SharedArrayBuffer` that holds the raw bytes of memory accessed by a WebAssembly Instance. A memory created by JavaScript or in WebAssembly code will be accessible and mutable from both JavaScript and WebAssembly. [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WebAssembly/Memory) ``` class Memory { constructor(descriptor: [MemoryDescriptor](webassembly.memorydescriptor)); readonly buffer: ArrayBuffer | SharedArrayBuffer; grow(delta: number): number; } ``` Constructors ------------ > `new Memory(descriptor: [MemoryDescriptor](webassembly.memorydescriptor))` Creates a new `Memory` object. Properties ---------- > `buffer: ArrayBuffer | SharedArrayBuffer` An accessor property that returns the buffer contained in the memory. Methods ------- > `grow(delta: number): number` Increases the size of the memory instance by a specified number of WebAssembly pages (each one is 64KB in size). 
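The page-based sizing described above can be sketched as follows; each WebAssembly page is a fixed 64 KiB, and `grow()` returns the previous size in pages.

```typescript
// One initial 64 KiB page, allowed to grow to two.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 2 });
console.log(memory.buffer.byteLength); // 65536

const previousPages = memory.grow(1); // add one page
console.log(previousPages); // 1
console.log(memory.buffer.byteLength); // 131072
```

Note that after `grow()` the `buffer` accessor returns a new, larger `ArrayBuffer`; any reference kept to the old buffer is detached.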
deno GPUBindGroupEntry GPUBindGroupEntry ================= ``` interface GPUBindGroupEntry {binding: number; resource: [GPUBindingResource](gpubindingresource); } ``` Properties ---------- > `binding: number` > `resource: [GPUBindingResource](gpubindingresource)` deno ReadableByteStreamController ReadableByteStreamController ============================ ``` interface ReadableByteStreamController { readonly byobRequest: [ReadableStreamBYOBRequest](readablestreambyobrequest) | null; readonly desiredSize: number | null; close(): void; enqueue(chunk: ArrayBufferView): void; error(error?: any): void; } ``` ``` var ReadableByteStreamController: {prototype: [ReadableByteStreamController](readablebytestreamcontroller); new (): [ReadableByteStreamController](readablebytestreamcontroller); }; ``` Properties ---------- > `readonly byobRequest: [ReadableStreamBYOBRequest](readablestreambyobrequest) | null` > `readonly desiredSize: number | null` Methods ------- > `close(): void` > `enqueue(chunk: ArrayBufferView): void` > `error(error?: any): void` deno CloseEvent CloseEvent ========== ``` class CloseEvent extends Event { constructor(type: string, eventInitDict?: [CloseEventInit](closeeventinit)); readonly code: number; readonly reason: string; readonly wasClean: boolean; } ``` Extends ------- > `Event` Constructors ------------ > `new CloseEvent(type: string, eventInitDict?: [CloseEventInit](closeeventinit))` Properties ---------- > `code: number` Returns the WebSocket connection close code provided by the server. > `reason: string` Returns the WebSocket connection close reason provided by the server. > `wasClean: boolean` Returns true if the connection closed cleanly; false otherwise. 
deno GPUTextureAspect GPUTextureAspect ================ ``` type GPUTextureAspect = "all" | "stencil-only" | "depth-only"; ``` Type ---- > `"all" | "stencil-only" | "depth-only"` deno WebAssembly.TableDescriptor WebAssembly.TableDescriptor =========================== The `TableDescriptor` describes the options you can pass to `new WebAssembly.Table()`. ``` interface TableDescriptor {element: [TableKind](webassembly.tablekind); initial: number; maximum?: number; } ``` Properties ---------- > `element: [TableKind](webassembly.tablekind)` > `initial: number` > `maximum?: number` deno Deno.WebSocketUpgrade Deno.WebSocketUpgrade ===================== ``` interface WebSocketUpgrade {response: [Response](response); socket: [WebSocket](websocket); } ``` Properties ---------- > `response: [Response](response)` > `socket: [WebSocket](websocket)` deno GPUStencilOperation GPUStencilOperation =================== ``` type GPUStencilOperation = | "keep" | "zero" | "replace" | "invert" | "increment-clamp" | "decrement-clamp" | "increment-wrap" | "decrement-wrap"; ``` Type ---- > `"keep" | "zero" | "replace" | "invert" | "increment-clamp" | "decrement-clamp" | "increment-wrap" | "decrement-wrap"` deno EcKeyImportParams EcKeyImportParams ================= ``` interface EcKeyImportParams extends [Algorithm](algorithm) {namedCurve: [NamedCurve](namedcurve); } ``` Extends ------- > `[Algorithm](algorithm)` Properties ---------- > `namedCurve: [NamedCurve](namedcurve)` deno GPUBufferBindingLayout GPUBufferBindingLayout ====================== ``` interface GPUBufferBindingLayout {hasDynamicOffset?: boolean; minBindingSize?: number; type?: [GPUBufferBindingType](gpubufferbindingtype); } ``` Properties ---------- > `hasDynamicOffset?: boolean` > `minBindingSize?: number` > `type?: [GPUBufferBindingType](gpubufferbindingtype)` deno GPUTextureViewDescriptor GPUTextureViewDescriptor ======================== ``` interface GPUTextureViewDescriptor extends 
[GPUObjectDescriptorBase](gpuobjectdescriptorbase) {arrayLayerCount?: number; aspect?: [GPUTextureAspect](gputextureaspect); baseArrayLayer?: number; baseMipLevel?: number; dimension?: [GPUTextureViewDimension](gputextureviewdimension); format?: [GPUTextureFormat](gputextureformat); mipLevelCount?: number; } ``` Extends ------- > `[GPUObjectDescriptorBase](gpuobjectdescriptorbase)` Properties ---------- > `arrayLayerCount?: number` > `aspect?: [GPUTextureAspect](gputextureaspect)` > `baseArrayLayer?: number` > `baseMipLevel?: number` > `dimension?: [GPUTextureViewDimension](gputextureviewdimension)` > `format?: [GPUTextureFormat](gputextureformat)` > `mipLevelCount?: number` deno BroadcastChannelEventMap BroadcastChannelEventMap ======================== ``` interface BroadcastChannelEventMap {message: [MessageEvent](messageevent); messageerror: [MessageEvent](messageevent); } ``` Properties ---------- > `message: [MessageEvent](messageevent)` > `messageerror: [MessageEvent](messageevent)` deno GPUColorWriteFlags GPUColorWriteFlags ================== ``` type GPUColorWriteFlags = number; ``` Type ---- > `number` deno Deno.OpMetrics Deno.OpMetrics ============== ``` interface OpMetrics {bytesReceived: number; bytesSentControl: number; bytesSentData: number; opsCompleted: number; opsCompletedAsync: number; opsCompletedAsyncUnref: number; opsCompletedSync: number; opsDispatched: number; opsDispatchedAsync: number; opsDispatchedAsyncUnref: number; opsDispatchedSync: number; } ``` Properties ---------- > `bytesReceived: number` > `bytesSentControl: number` > `bytesSentData: number` > `opsCompleted: number` > `opsCompletedAsync: number` > `opsCompletedAsyncUnref: number` > `opsCompletedSync: number` > `opsDispatched: number` > `opsDispatchedAsync: number` > `opsDispatchedAsyncUnref: number` > `opsDispatchedSync: number` deno Deno Deno ==== The global namespace where Deno specific, non-standard APIs are located. 
Namespace --------- | | | | --- | --- | | [Deno.errors](deno.errors) | A set of error constructors that are raised by Deno APIs. | Classes ------- | | | | --- | --- | | [Deno.Buffer](deno.buffer) deprecated | A variable-sized buffer of bytes with `read()` and `write()` methods. | | [Deno.errors.AddrInUse](deno.errors.addrinuse) | Raised when attempting to open a server listener on an address and port that already has a listener. | | [Deno.errors.AddrNotAvailable](deno.errors.addrnotavailable) | Raised when the underlying operating system reports an `EADDRNOTAVAIL` error. | | [Deno.errors.AlreadyExists](deno.errors.alreadyexists) | Raised when trying to create a resource, like a file, that already exists. | | [Deno.errors.BadResource](deno.errors.badresource) | The underlying IO resource is invalid or closed, and so the operation could not be performed. | | [Deno.errors.BrokenPipe](deno.errors.brokenpipe) | Raised when trying to write to a resource and a broken pipe error occurs. This can happen when trying to write directly to `stdout` or `stderr` and the operating system is unable to pipe the output for a reason external to the Deno runtime. | | [Deno.errors.Busy](deno.errors.busy) | Raised when the underlying IO resource is not available because it is being awaited on in another block of code. | | [Deno.errors.ConnectionAborted](deno.errors.connectionaborted) | Raised when the underlying operating system reports an `ECONNABORTED` error. | | [Deno.errors.ConnectionRefused](deno.errors.connectionrefused) | Raised when the underlying operating system reports that a connection to a resource is refused. | | [Deno.errors.ConnectionReset](deno.errors.connectionreset) | Raised when the underlying operating system reports that a connection has been reset. With network servers, it can be a *normal* occurrence where a client will abort a connection instead of properly shutting it down. 
| | [Deno.errors.Http](deno.errors.http) | Raised when too many redirects were encountered while attempting to load a dynamic import. | | [Deno.errors.Interrupted](deno.errors.interrupted) | Raised when the underlying operating system reports an `EINTR` error. In many cases, this underlying IO error will be handled internally within Deno, or result in a [`BadResource`](deno.errors.badresource) error instead. | | [Deno.errors.InvalidData](deno.errors.invaliddata) | Raised when an operation returns data that is invalid for the operation being performed. | | [Deno.errors.NotConnected](deno.errors.notconnected) | Raised when the underlying operating system reports an `ENOTCONN` error. | | [Deno.errors.NotFound](deno.errors.notfound) | Raised when the underlying operating system indicates that the file was not found. | | [Deno.errors.NotSupported](deno.errors.notsupported) | Raised when the underlying Deno API is asked to perform a function that is not currently supported. | | [Deno.errors.PermissionDenied](deno.errors.permissiondenied) | Raised when the underlying operating system indicates that the current user which the Deno process is running under does not have the appropriate permissions to a file or resource, or the user *did not* provide the required `--allow-*` flag. | | [Deno.errors.TimedOut](deno.errors.timedout) | Raised when the underlying operating system reports that an I/O operation has timed out (`ETIMEDOUT`). | | [Deno.errors.UnexpectedEof](deno.errors.unexpectedeof) | Raised when attempting to read bytes from a resource, but the EOF was unexpectedly encountered. | | [Deno.errors.WriteZero](deno.errors.writezero) | Raised when expecting to write to an IO buffer resulted in zero bytes being written. | | [Deno.FsFile](deno.fsfile) | The Deno abstraction for reading and writing files. 
| | [Deno.Permissions](deno.permissions) | | | [Deno.PermissionStatus](deno.permissionstatus) | | | [Deno.Process](deno.process) | Represents an instance of a sub process that is returned from [`Deno.run`](deno#run) which can be used to manage the sub-process. | Enums ----- | | | | --- | --- | | [Deno.SeekMode](deno.seekmode) | An enum which defines the seek mode for IO related APIs that support seeking. | Variables --------- | | | | --- | --- | | [Deno.args](deno.args) | Returns the script arguments to the program. If for example we run a program: | | [Deno.build](deno.build) | Build related information. | | [Deno.customInspect](deno.custominspect) deprecated | A symbol which can be used as a key for a custom method which will be called when `Deno.inspect()` is called, or when the object is logged to the console. | | [Deno.env](deno.env) | An interface containing methods to interact with the process environment variables. | | [Deno.File](deno.file) deprecated | The Deno abstraction for reading and writing files. | | [Deno.mainModule](deno.mainmodule) | The URL of the entrypoint module entered from the command-line. | | [Deno.noColor](deno.nocolor) | Reflects the `NO_COLOR` environment variable at program start. | | [Deno.permissions](deno.permissions) | Deno's permission management API. | | [Deno.pid](deno.pid) | The current process ID of this instance of the Deno CLI. | | [Deno.ppid](deno.ppid) | The process ID of the parent process of this instance of the Deno CLI. | | [Deno.stderr](deno.stderr) | A reference to `stderr` which can be used to write directly to `stderr`. It implements the Deno specific [`Writer`](deno.writer), [`WriterSync`](deno.writersync), and [`Closer`](deno.closer) interfaces as well as provides a [`WritableStream`](writablestream) interface. | | [Deno.stdin](deno.stdin) | A reference to `stdin` which can be used to read directly from `stdin`. 
It implements the Deno specific [`Reader`](deno.reader), [`ReaderSync`](deno.readersync), and [`Closer`](deno.closer) interfaces as well as provides a [`ReadableStream`](readablestream) interface. | | [Deno.stdout](deno.stdout) | A reference to `stdout` which can be used to write directly to `stdout`. It implements the Deno specific [`Writer`](deno.writer), [`WriterSync`](deno.writersync), and [`Closer`](deno.closer) interfaces as well as provides a [`WritableStream`](writablestream) interface. | | [Deno.version](deno.version) | Version related information. | Functions --------- | | | | --- | --- | | [Deno.addSignalListener](deno.addsignallistener) | Registers the given function as a listener of the given signal event. | | [Deno.chdir](deno.chdir) | Change the current working directory to the specified path. | | [Deno.chmod](deno.chmod) | Changes the permission of a specific file/directory of specified path. Ignores the process's umask. | | [Deno.chmodSync](deno.chmodsync) | Synchronously changes the permission of a specific file/directory of specified path. Ignores the process's umask. | | [Deno.chown](deno.chown) | Change owner of a regular file or directory. | | [Deno.chownSync](deno.chownsync) | Synchronously change owner of a regular file or directory. | | [Deno.close](deno.close) | Close the given resource ID (`rid`) which has been previously opened, such as via opening or creating a file. Closing a file when you are finished with it is important to avoid leaking resources. | | [Deno.connect](deno.connect) | Connects to the hostname (default is "127.0.0.1") and port on the named transport (default is "tcp"), and resolves to the connection (`Conn`). | | [Deno.connectTls](deno.connecttls) | Establishes a secure connection over TLS (transport layer security) using an optional cert file, hostname (default is "127.0.0.1") and port. 
The cert file is optional and, if not included, Mozilla's root certificates will be used (see also <https://github.com/ctz/webpki-roots> for specifics) | | [Deno.copy](deno.copy) deprecated | Copies from `src` to `dst` until either EOF (`null`) is read from `src` or an error occurs. It resolves to the number of bytes copied or rejects with the first error encountered while copying. | | [Deno.copyFile](deno.copyfile) | Copies the contents and permissions of one file to another specified path, by default creating a new file if needed, else overwriting. Fails if target path is a directory or is unwritable. | | [Deno.copyFileSync](deno.copyfilesync) | Synchronously copies the contents and permissions of one file to another specified path, by default creating a new file if needed, else overwriting. Fails if target path is a directory or is unwritable. | | [Deno.create](deno.create) | Creates a file if none exists or truncates an existing file and resolves to an instance of [`Deno.FsFile`](deno#FsFile). | | [Deno.createSync](deno.createsync) | Creates a file if none exists or truncates an existing file and returns an instance of [`Deno.FsFile`](deno#FsFile). | | [Deno.cwd](deno.cwd) | Return a string representing the current working directory. | | [Deno.execPath](deno.execpath) | Returns the path to the current deno executable. | | [Deno.exit](deno.exit) | Exit the Deno process with optional exit code. | | [Deno.fdatasync](deno.fdatasync) | Flushes any pending data operations of the given file stream to disk. ``` const file = await Deno.open( "my_file.txt", { read: true, write: true, create: true }, ); await Deno.write(file.rid, new TextEncoder().encode("Hello World")); await Deno.fdatasync(file.rid); console.log(new TextDecoder().decode(await Deno.readFile("my_file.txt"))); // Hello World ``` | | [Deno.fdatasyncSync](deno.fdatasyncsync) | Synchronously flushes any pending data operations of the given file stream to disk. 
| | [Deno.fstat](deno.fstat) | Returns a `Deno.FileInfo` for the given file stream. | | [Deno.fstatSync](deno.fstatsync) | Synchronously returns a `Deno.FileInfo` for the given file stream. | | [Deno.fsync](deno.fsync) | Flushes any pending data and metadata operations of the given file stream to disk. | | [Deno.fsyncSync](deno.fsyncsync) | Synchronously flushes any pending data and metadata operations of the given file stream to disk. | | [Deno.ftruncate](deno.ftruncate) | Truncates or extends the specified file stream, to reach the specified `len`. | | [Deno.ftruncateSync](deno.ftruncatesync) | Synchronously truncates or extends the specified file stream, to reach the specified `len`. | | [Deno.hostname](deno.hostname) | Get the `hostname` of the machine the Deno process is running on. | | [Deno.inspect](deno.inspect) | Converts the input into a string that has the same format as printed by `console.log()`. | | [Deno.isatty](deno.isatty) | Check if a given resource id (`rid`) is a TTY (a terminal). | | [Deno.iter](deno.iter) deprecated | Turns a Reader, `r`, into an async iterator. | | [Deno.iterSync](deno.itersync) deprecated | Turns a ReaderSync, `r`, into an iterator. | | [Deno.kill](deno.kill) | Send a signal to the process with the given `pid`. | | [Deno.link](deno.link) | Creates `newpath` as a hard link to `oldpath`. | | [Deno.linkSync](deno.linksync) | Synchronously creates `newpath` as a hard link to `oldpath`. | | [Deno.listen](deno.listen) | Listen announces on the local transport address. | | [Deno.listenTls](deno.listentls) | Listen announces on the local transport address over TLS (transport layer security). | | [Deno.lstat](deno.lstat) | Resolves to a [`Deno.FileInfo`](deno#FileInfo) for the specified `path`. If `path` is a symlink, information for the symlink will be returned instead of what it points to. | | [Deno.lstatSync](deno.lstatsync) | Synchronously returns a [`Deno.FileInfo`](deno#FileInfo) for the specified `path`. 
If `path` is a symlink, information for the symlink will be returned instead of what it points to. | | [Deno.makeTempDir](deno.maketempdir) | Creates a new temporary directory in the default directory for temporary files, unless `dir` is specified. Other options include prefixing and suffixing the directory name with `prefix` and `suffix` respectively. | | [Deno.makeTempDirSync](deno.maketempdirsync) | Synchronously creates a new temporary directory in the default directory for temporary files, unless `dir` is specified. Other options include prefixing and suffixing the directory name with `prefix` and `suffix` respectively. | | [Deno.makeTempFile](deno.maketempfile) | Creates a new temporary file in the default directory for temporary files, unless `dir` is specified. | | [Deno.makeTempFileSync](deno.maketempfilesync) | Synchronously creates a new temporary file in the default directory for temporary files, unless `dir` is specified. | | [Deno.memoryUsage](deno.memoryusage) | Returns an object describing the memory usage of the Deno process and the V8 subsystem measured in bytes. | | [Deno.metrics](deno.metrics) | Receive metrics from the privileged side of Deno. This is primarily used in the development of Deno. *Ops*, also called *bindings*, are the go-between between the Deno JavaScript sandbox and the rest of Deno. | | [Deno.mkdir](deno.mkdir) | Creates a new directory with the specified path. | | [Deno.mkdirSync](deno.mkdirsync) | Synchronously creates a new directory with the specified path. | | [Deno.open](deno.open) | Open a file and resolve to an instance of [`Deno.FsFile`](deno#FsFile). The file does not need to previously exist if using the `create` or `createNew` open options. It is the caller's responsibility to close the file when finished with it. | | [Deno.openSync](deno.opensync) | Synchronously open a file and return an instance of [`Deno.FsFile`](deno#FsFile). 
The file does not need to previously exist if using the `create` or `createNew` open options. It is the caller's responsibility to close the file when finished with it. | | [Deno.read](deno.read) | Read from a resource ID (`rid`) into an array buffer (`buffer`). | | [Deno.readAll](deno.readall) deprecated | Read Reader `r` until EOF (`null`) and resolve to the content as `Uint8Array`. | | [Deno.readAllSync](deno.readallsync) deprecated | Synchronously reads Reader `r` until EOF (`null`) and returns the content as `Uint8Array`. | | [Deno.readDir](deno.readdir) | Reads the directory given by `path` and returns an async iterable of [`Deno.DirEntry`](deno#DirEntry). | | [Deno.readDirSync](deno.readdirsync) | Synchronously reads the directory given by `path` and returns an iterable of `Deno.DirEntry`. | | [Deno.readFile](deno.readfile) | Reads and resolves to the entire contents of a file as an array of bytes. `TextDecoder` can be used to transform the bytes to a string if required. Reading a directory returns an empty data array. | | [Deno.readFileSync](deno.readfilesync) | Synchronously reads and returns the entire contents of a file as an array of bytes. `TextDecoder` can be used to transform the bytes to a string if required. Reading a directory returns an empty data array. | | [Deno.readLink](deno.readlink) | Resolves to the full path destination of the named symbolic link. | | [Deno.readLinkSync](deno.readlinksync) | Synchronously returns the full path destination of the named symbolic link. | | [Deno.readSync](deno.readsync) | Synchronously read from a resource ID (`rid`) into an array buffer (`buffer`). | | [Deno.readTextFile](deno.readtextfile) | Asynchronously reads and returns the entire contents of a file as a UTF-8 decoded string. Reading a directory throws an error. | | [Deno.readTextFileSync](deno.readtextfilesync) | Synchronously reads and returns the entire contents of a file as a UTF-8 decoded string. Reading a directory throws an error.
| | [Deno.realPath](deno.realpath) | Resolves to the absolute normalized path, with symbolic links resolved. | | [Deno.realPathSync](deno.realpathsync) | Synchronously returns the absolute normalized path, with symbolic links resolved. | | [Deno.refTimer](deno.reftimer) | Make the timer of the given `id` block the event loop from finishing. | | [Deno.remove](deno.remove) | Removes the named file or directory. | | [Deno.removeSignalListener](deno.removesignallistener) | Removes the given signal listener that has been registered with [`Deno.addSignalListener`](deno#addSignalListener). | | [Deno.removeSync](deno.removesync) | Synchronously removes the named file or directory. | | [Deno.rename](deno.rename) | Renames (moves) `oldpath` to `newpath`. Paths may be files or directories. If `newpath` already exists and is not a directory, `rename()` replaces it. OS-specific restrictions may apply when `oldpath` and `newpath` are in different directories. | | [Deno.renameSync](deno.renamesync) | Synchronously renames (moves) `oldpath` to `newpath`. Paths may be files or directories. If `newpath` already exists and is not a directory, `renameSync()` replaces it. OS-specific restrictions may apply when `oldpath` and `newpath` are in different directories. | | [Deno.resolveDns](deno.resolvedns) | | | [Deno.resources](deno.resources) | Returns a map of open resource IDs (*rid*) along with their string representations. This is an internal API and as such resource representation has `unknown` type; that means it can change any time and should not be depended upon. | | [Deno.run](deno.run) | Spawns a new subprocess. RunOptions must contain at a minimum the `opt.cmd`, an array of program arguments, the first of which is the binary. | | [Deno.seek](deno.seek) | Seek a resource ID (`rid`) to the given `offset` under the mode given by `whence`. The call resolves to the new position within the resource (bytes from the start).
| | [Deno.seekSync](deno.seeksync) | Synchronously seek a resource ID (`rid`) to the given `offset` under the mode given by `whence`. The new position within the resource (bytes from the start) is returned. | | [Deno.serveHttp](deno.servehttp) | Services HTTP requests given a TCP or TLS socket. | | [Deno.shutdown](deno.shutdown) | Shutdown socket send operations. | | [Deno.startTls](deno.starttls) | Start a TLS handshake from an existing connection using an optional list of CA certificates, and a hostname (default is "127.0.0.1"). Specifying CA certs is optional. By default the configured root certificates are used. Using this function requires that the other end of the connection is prepared for a TLS handshake. | | [Deno.stat](deno.stat) | Resolves to a [`Deno.FileInfo`](deno#FileInfo) for the specified `path`. Will always follow symlinks. | | [Deno.statSync](deno.statsync) | Synchronously returns a [`Deno.FileInfo`](deno#FileInfo) for the specified `path`. Will always follow symlinks. | | [Deno.symlink](deno.symlink) | Creates `newpath` as a symbolic link to `oldpath`. | | [Deno.symlinkSync](deno.symlinksync) | Synchronously creates `newpath` as a symbolic link to `oldpath`. | | [Deno.test](deno.test) | Register a test which will be run when `deno test` is used on the command line and the containing module looks like a test module. | | [Deno.truncate](deno.truncate) | Truncates (or extends) the specified file, to reach the specified `len`. If `len` is not specified then the entire file contents are truncated. | | [Deno.truncateSync](deno.truncatesync) | Synchronously truncates (or extends) the specified file, to reach the specified `len`. If `len` is not specified then the entire file contents are truncated. | | [Deno.unrefTimer](deno.unreftimer) | Make the timer of the given `id` not block the event loop from finishing. | | [Deno.upgradeWebSocket](deno.upgradewebsocket) | Used to upgrade an incoming HTTP request to a WebSocket.
| | [Deno.watchFs](deno.watchfs) | Watch for file system events against one or more `paths`, which can be files or directories. These paths must exist already. One user action (e.g. `touch test.file`) can generate multiple file system events. Likewise, one user action can result in multiple file paths in one event (e.g. `mv old_name.txt new_name.txt`). | | [Deno.write](deno.write) | Write to the resource ID (`rid`) the contents of the array buffer (`data`). | | [Deno.writeAll](deno.writeall) deprecated | Write all the content of the array buffer (`arr`) to the writer (`w`). | | [Deno.writeAllSync](deno.writeallsync) deprecated | Synchronously write all the content of the array buffer (`arr`) to the writer (`w`). | | [Deno.writeFile](deno.writefile) | Write `data` to the given `path`, by default creating a new file if needed, else overwriting. | | [Deno.writeFileSync](deno.writefilesync) | Synchronously write `data` to the given `path`, by default creating a new file if needed, else overwriting. | | [Deno.writeSync](deno.writesync) | Synchronously write to the resource ID (`rid`) the contents of the array buffer (`data`). | | [Deno.writeTextFile](deno.writetextfile) | Write string `data` to the given `path`, by default creating a new file if needed, else overwriting. | | [Deno.writeTextFileSync](deno.writetextfilesync) | Synchronously write string `data` to the given `path`, by default creating a new file if needed, else overwriting. | Interfaces ---------- | | | | --- | --- | | [Deno.CAARecord](deno.caarecord) | If `resolveDns` is called with "CAA" record type specified, it will return an array of this interface. | | [Deno.Closer](deno.closer) | An abstract interface which when implemented provides an interface to close files/resources that were previously opened. 
| | [Deno.Conn](deno.conn) | | | [Deno.ConnectOptions](deno.connectoptions) | | | [Deno.ConnectTlsOptions](deno.connecttlsoptions) | | | [Deno.DirEntry](deno.direntry) | Information about a directory entry returned from [`Deno.readDir`](deno#readDir) and [`Deno.readDirSync`](deno#readDirSync). | | [Deno.Env](deno.env) | An interface containing methods to interact with the process environment variables. | | [Deno.EnvPermissionDescriptor](deno.envpermissiondescriptor) | | | [Deno.FfiPermissionDescriptor](deno.ffipermissiondescriptor) | | | [Deno.FileInfo](deno.fileinfo) | Provides information about a file and is returned by [`Deno.stat`](deno#stat), [`Deno.lstat`](deno#lstat), [`Deno.statSync`](deno#statSync), and [`Deno.lstatSync`](deno#lstatSync) or from calling `stat()` and `statSync()` on a [`Deno.FsFile`](deno#FsFile) instance. | | [Deno.FsEvent](deno.fsevent) | Represents a unique file system event yielded by a [`Deno.FsWatcher`](deno#FsWatcher). | | [Deno.FsWatcher](deno.fswatcher) | Returned by [`Deno.watchFs`](deno#watchFs). It is an async iterator yielding file system events. To stop watching the file system, call the `.close()` method. | | [Deno.HrtimePermissionDescriptor](deno.hrtimepermissiondescriptor) | | | [Deno.HttpConn](deno.httpconn) | | | [Deno.InspectOptions](deno.inspectoptions) | Options which can be specified when performing [`Deno.inspect`](deno#inspect). | | [Deno.Listener](deno.listener) | A generic network listener for stream-oriented protocols. | | [Deno.ListenOptions](deno.listenoptions) | | | [Deno.ListenTlsOptions](deno.listentlsoptions) | | | [Deno.MakeTempOptions](deno.maketempoptions) | Options which can be set when using [`Deno.makeTempDir`](deno#makeTempDir), [`Deno.makeTempDirSync`](deno#makeTempDirSync), [`Deno.makeTempFile`](deno#makeTempFile), and [`Deno.makeTempFileSync`](deno#makeTempFileSync).
| | [Deno.MemoryUsage](deno.memoryusage) | | | [Deno.Metrics](deno.metrics) | | | [Deno.MkdirOptions](deno.mkdiroptions) | Options which can be set when using [`Deno.mkdir`](deno#mkdir) and [`Deno.mkdirSync`](deno#mkdirSync). | | [Deno.MXRecord](deno.mxrecord) | If `resolveDns` is called with "MX" record type specified, it will return an array of this interface. | | [Deno.NAPTRRecord](deno.naptrrecord) | If `resolveDns` is called with "NAPTR" record type specified, it will return an array of this interface. | | [Deno.NetAddr](deno.netaddr) | | | [Deno.NetPermissionDescriptor](deno.netpermissiondescriptor) | | | [Deno.OpenOptions](deno.openoptions) | Options which can be set when doing [`Deno.open`](deno#open) and [`Deno.openSync`](deno#openSync). | | [Deno.OpMetrics](deno.opmetrics) | | | [Deno.PermissionOptionsObject](deno.permissionoptionsobject) | A set of options which can define the permissions within a test or worker context at a highly specific level. | | [Deno.PermissionStatusEventMap](deno.permissionstatuseventmap) | | | [Deno.Reader](deno.reader) | An abstract interface which when implemented provides an interface to read bytes into an array buffer asynchronously. | | [Deno.ReaderSync](deno.readersync) | An abstract interface which when implemented provides an interface to read bytes into an array buffer synchronously. | | [Deno.ReadFileOptions](deno.readfileoptions) | Options which can be set when using [`Deno.readFile`](deno#readFile) or [`Deno.readFileSync`](deno#readFileSync). | | [Deno.ReadPermissionDescriptor](deno.readpermissiondescriptor) | | | [Deno.RemoveOptions](deno.removeoptions) | Options which can be set when using [`Deno.remove`](deno#remove) and [`Deno.removeSync`](deno#removeSync). | | [Deno.RequestEvent](deno.requestevent) | | | [Deno.ResolveDnsOptions](deno.resolvednsoptions) | | | [Deno.ResourceMap](deno.resourcemap) | A map of open resources that Deno is tracking. 
The key is the resource ID (*rid*) and the value is its representation. | | [Deno.RunOptions](deno.runoptions) | Options which can be used with [`Deno.run`](deno#run). | | [Deno.RunPermissionDescriptor](deno.runpermissiondescriptor) | | | [Deno.Seeker](deno.seeker) | An abstract interface which when implemented provides an interface to seek within an open file/resource asynchronously. | | [Deno.SeekerSync](deno.seekersync) | An abstract interface which when implemented provides an interface to seek within an open file/resource synchronously. | | [Deno.SetRawOptions](deno.setrawoptions) | **UNSTABLE**: new API, yet to be vetted. | | [Deno.SOARecord](deno.soarecord) | If `resolveDns` is called with "SOA" record type specified, it will return an array of this interface. | | [Deno.SRVRecord](deno.srvrecord) | If `resolveDns` is called with "SRV" record type specified, it will return an array of this interface. | | [Deno.StartTlsOptions](deno.starttlsoptions) | | | [Deno.SysPermissionDescriptor](deno.syspermissiondescriptor) | | | [Deno.TcpConn](deno.tcpconn) | | | [Deno.TestContext](deno.testcontext) | Context that is passed to a testing function, which can be used to either gain information about the current test, or register additional test steps within the current test. | | [Deno.TestDefinition](deno.testdefinition) | | | [Deno.TestStepDefinition](deno.teststepdefinition) | | | [Deno.TlsConn](deno.tlsconn) | | | [Deno.TlsHandshakeInfo](deno.tlshandshakeinfo) | | | [Deno.TlsListener](deno.tlslistener) | Specialized listener that accepts TLS connections. | | [Deno.UnixAddr](deno.unixaddr) | | | [Deno.UnixConn](deno.unixconn) | | | [Deno.UpgradeWebSocketOptions](deno.upgradewebsocketoptions) | | | [Deno.WebSocketUpgrade](deno.websocketupgrade) | | | [Deno.WriteFileOptions](deno.writefileoptions) | Options for writing to a file. 
| | [Deno.WritePermissionDescriptor](deno.writepermissiondescriptor) | | | [Deno.Writer](deno.writer) | An abstract interface which when implemented provides an interface to write bytes from an array buffer to a file/resource asynchronously. | | [Deno.WriterSync](deno.writersync) | An abstract interface which when implemented provides an interface to write bytes from an array buffer to a file/resource synchronously. | Type Aliases ------------ | | | | --- | --- | | [Deno.Addr](deno.addr) | | | [Deno.FsEventFlag](deno.fseventflag) | Additional information for FsEvent objects with the "other" kind. | | [Deno.PermissionDescriptor](deno.permissiondescriptor) | Permission descriptors which define a permission and can be queried, requested, or revoked. | | [Deno.PermissionName](deno.permissionname) | The name of a privileged feature which needs permission. | | [Deno.PermissionOptions](deno.permissionoptions) | Options which define the permissions within a test or worker context. | | [Deno.PermissionState](deno.permissionstate) | The current status of the permission: | | [Deno.ProcessStatus](deno.processstatus) | The status resolved from the `.status()` method of a [`Deno.Process`](deno#Process) instance. | | [Deno.RecordType](deno.recordtype) | The type of the resource record. Only the listed types are supported currently. | | [Deno.Signal](deno.signal) | Operating signals which can be listened for or sent to sub-processes. The available signals and their standard behaviors are OS dependent. | | [Deno.SymlinkOptions](deno.symlinkoptions) | |
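The permission descriptors indexed above combine into a discriminated union keyed on `name`. As a rough sketch (the field names below are illustrative assumptions, reduced from the descriptor interfaces rather than copied from them), TypeScript narrows the per-permission fields from that tag:

```typescript
// Sketch of a descriptor union in the style of Deno.PermissionDescriptor.
// The `name` literal acts as the discriminant; each variant carries its own
// optional scope field (host, path, variable). Assumed shapes, not the real API.
type PermissionDescriptorSketch =
  | { name: "net"; host?: string }
  | { name: "read"; path?: string }
  | { name: "env"; variable?: string };

function describe(d: PermissionDescriptorSketch): string {
  switch (d.name) {
    case "net":
      // Inside this branch TypeScript knows `d.host` exists.
      return `net access${d.host ? ` to ${d.host}` : ""}`;
    case "read":
      return `read access${d.path ? ` to ${d.path}` : ""}`;
    case "env":
      return `env access${d.variable ? ` to ${d.variable}` : ""}`;
  }
}

console.log(describe({ name: "net", host: "deno.land:8080" }));
```

This is why APIs such as `Deno.permissions.query()` can accept one parameter type yet still type-check permission-specific fields.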
deno Deno.ppid Deno.ppid ========= The process ID of the parent process of this instance of the Deno CLI. ``` console.log(Deno.ppid); ``` ``` const ppid: number; ``` deno DomIterable DomIterable =========== ``` interface DomIterable <K, V> {[[Symbol.iterator]](): IterableIterator<[K, V]>; entries(): IterableIterator<[K, V]>; forEach(callback: ( value: V, key: K, parent: this,) => void, thisArg?: any): void; keys(): IterableIterator<K>; values(): IterableIterator<V>; } ``` Type Parameters --------------- > `K` > `V` Methods ------- > `[[Symbol.iterator]](): IterableIterator<[K, V]>` > `entries(): IterableIterator<[K, V]>` > `forEach(callback: ( > value: V, > > key: K, > > parent: this,) => void, thisArg?: any): void` > `keys(): IterableIterator<K>` > `values(): IterableIterator<V>` deno Deno.chownSync Deno.chownSync ============== Synchronously change the owner of a regular file or directory. This functionality is not available on Windows. ``` Deno.chownSync("myFile.txt", 1000, 1002); ``` Requires `allow-write` permission. Throws Error (not implemented) if executed on Windows. ``` function chownSync( path: string | [URL](url), uid: number | null, gid: number | null,): void; ``` > `chownSync(path: string | [URL](url), uid: number | null, gid: number | null): void` ### Parameters > `path: string | [URL](url)` path to the file > `uid: number | null` user id (UID) of the new owner, or `null` for no change > `gid: number | null` group id (GID) of the new owner, or `null` for no change ### Return Type > `void` deno Deno.errors.AddrInUse Deno.errors.AddrInUse ===================== Raised when attempting to open a server listener on an address and port that already has a listener.
``` class AddrInUse extends Error { } ``` Extends ------- > `Error` deno WorkerEventMap WorkerEventMap ============== ``` interface WorkerEventMap extends [AbstractWorkerEventMap](abstractworkereventmap) {message: [MessageEvent](messageevent); messageerror: [MessageEvent](messageevent); } ``` Extends ------- > `[AbstractWorkerEventMap](abstractworkereventmap)` Properties ---------- > `message: [MessageEvent](messageevent)` > `messageerror: [MessageEvent](messageevent)` deno PerformanceMark PerformanceMark =============== `PerformanceMark` is an abstract interface for `PerformanceEntry` objects with an entryType of `"mark"`. Entries of this type are created by calling `performance.mark()` to add a named `DOMHighResTimeStamp` (the mark) to the performance timeline. ``` class PerformanceMark extends PerformanceEntry { constructor(name: string, options?: [PerformanceMarkOptions](performancemarkoptions)); readonly detail: any; readonly entryType: "mark"; } ``` Extends ------- > `PerformanceEntry` Constructors ------------ > `new PerformanceMark(name: string, options?: [PerformanceMarkOptions](performancemarkoptions))` Properties ---------- > `detail: any` > `entryType: "mark"` deno File File ==== Provides information about files and allows JavaScript in a web page to access their content. 
``` class File extends Blob { constructor( fileBits: [BlobPart](blobpart)[], fileName: string, options?: [FilePropertyBag](filepropertybag),); readonly lastModified: number; readonly name: string; } ``` Extends ------- > `Blob` Constructors ------------ > `new File(fileBits: [BlobPart](blobpart)[], fileName: string, options?: [FilePropertyBag](filepropertybag))` Properties ---------- > `lastModified: number` > `name: string` deno GPURenderPipelineDescriptor GPURenderPipelineDescriptor =========================== ``` interface GPURenderPipelineDescriptor extends [GPUPipelineDescriptorBase](gpupipelinedescriptorbase) {depthStencil?: [GPUDepthStencilState](gpudepthstencilstate); fragment?: [GPUFragmentState](gpufragmentstate); multisample?: [GPUMultisampleState](gpumultisamplestate); primitive?: [GPUPrimitiveState](gpuprimitivestate); vertex: [GPUVertexState](gpuvertexstate); } ``` Extends ------- > `[GPUPipelineDescriptorBase](gpupipelinedescriptorbase)` Properties ---------- > `depthStencil?: [GPUDepthStencilState](gpudepthstencilstate)` > `fragment?: [GPUFragmentState](gpufragmentstate)` > `multisample?: [GPUMultisampleState](gpumultisamplestate)` > `primitive?: [GPUPrimitiveState](gpuprimitivestate)` > `vertex: [GPUVertexState](gpuvertexstate)` deno WebAssembly.ValueType WebAssembly.ValueType ===================== ``` type ValueType = | "f32" | "f64" | "i32" | "i64"; ``` Type ---- > `"f32" | "f64" | "i32" | "i64"` deno Deno.ListenOptions Deno.ListenOptions ================== ``` interface ListenOptions {hostname?: string; port: number; } ``` Properties ---------- > `hostname?: string` A literal IP address or host name that can be resolved to an IP address. If not specified, defaults to `0.0.0.0`. **Note about `0.0.0.0`** While listening on `0.0.0.0` works on all platforms, the browsers on Windows don't work with the address `0.0.0.0`.
You should show a message like `server running on localhost:8080` instead of `server running on 0.0.0.0:8080` if your program supports Windows. > `port: number` The port to listen on. deno GPUMipmapFilterMode GPUMipmapFilterMode =================== ``` type GPUMipmapFilterMode = "nearest" | "linear"; ``` Type ---- > `"nearest" | "linear"` deno Deno.Process Deno.Process ============ Represents an instance of a sub-process that is returned from [`Deno.run`](deno#run) which can be used to manage the sub-process. ``` class Process<T extends [RunOptions](deno.runoptions) = [RunOptions](deno.runoptions)> { readonly pid: number; readonly rid: number; readonly stderr: T["stderr"] extends "piped" ? [Reader](deno.reader) & [Closer](deno.closer) & {readable: [ReadableStream](readablestream)<Uint8Array>; } : ([Reader](deno.reader) & [Closer](deno.closer) & {readable: [ReadableStream](readablestream)<Uint8Array>; }) | null; readonly stdin: T["stdin"] extends "piped" ? [Writer](deno.writer) & [Closer](deno.closer) & {writable: [WritableStream](writablestream)<Uint8Array>; } : ([Writer](deno.writer) & [Closer](deno.closer) & {writable: [WritableStream](writablestream)<Uint8Array>; }) | null; readonly stdout: T["stdout"] extends "piped" ? [Reader](deno.reader) & [Closer](deno.closer) & {readable: [ReadableStream](readablestream)<Uint8Array>; } : ([Reader](deno.reader) & [Closer](deno.closer) & {readable: [ReadableStream](readablestream)<Uint8Array>; }) | null; close(): void; kill(signo: [Signal](deno.signal)): void; output(): Promise<Uint8Array>; status(): Promise<[ProcessStatus](deno.processstatus)>; stderrOutput(): Promise<Uint8Array>; } ``` Type Parameters --------------- > `T extends [RunOptions](deno.runoptions) = [RunOptions](deno.runoptions)` Properties ---------- > `pid: number` The operating system's process ID for the sub-process. > `rid: number` The resource ID of the sub-process. > `stderr: T["stderr"] extends "piped" ?
[Reader](deno.reader) & [Closer](deno.closer) & {readable: [ReadableStream](readablestream)<Uint8Array>; } : ([Reader](deno.reader) & [Closer](deno.closer) & {readable: [ReadableStream](readablestream)<Uint8Array>; }) | null` A reference to the sub-process's `stderr`, which allows interacting with the sub-process at a low level. > `stdin: T["stdin"] extends "piped" ? [Writer](deno.writer) & [Closer](deno.closer) & {writable: [WritableStream](writablestream)<Uint8Array>; } : ([Writer](deno.writer) & [Closer](deno.closer) & {writable: [WritableStream](writablestream)<Uint8Array>; }) | null` A reference to the sub-process's `stdin`, which allows interacting with the sub-process at a low level. > `stdout: T["stdout"] extends "piped" ? [Reader](deno.reader) & [Closer](deno.closer) & {readable: [ReadableStream](readablestream)<Uint8Array>; } : ([Reader](deno.reader) & [Closer](deno.closer) & {readable: [ReadableStream](readablestream)<Uint8Array>; }) | null` A reference to the sub-process's `stdout`, which allows interacting with the sub-process at a low level. Methods ------- > `close(): void` Clean up resources associated with the sub-process instance. > `kill(signo: [Signal](deno.signal)): void` Send a signal to the process. ``` const p = Deno.run({ cmd: [ "sleep", "20" ]}); p.kill("SIGTERM"); p.close(); ``` > `output(): Promise<Uint8Array>` Buffer the stdout until EOF and return it as `Uint8Array`. You must set `stdout` to `"piped"` when creating the process. This calls `close()` on stdout after it's done. > `status(): Promise<[ProcessStatus](deno.processstatus)>` Wait for the process to exit and return its exit status. Calling this function multiple times will return the same status. The `stdin` reference to the process will be closed before waiting to avoid a deadlock. If `stdout` and/or `stderr` were set to `"piped"`, they must be closed manually before the process can exit.
To run the process to completion and collect output from both `stdout` and `stderr` use: ``` const p = Deno.run({ cmd: [ "echo", "hello world" ], stderr: 'piped', stdout: 'piped' }); const [status, stdout, stderr] = await Promise.all([ p.status(), p.output(), p.stderrOutput() ]); p.close(); ``` > `stderrOutput(): Promise<Uint8Array>` Buffer the stderr until EOF and return it as `Uint8Array`. You must set `stderr` to `"piped"` when creating the process. This calls `close()` on stderr after it's done. deno Deno.HttpConn Deno.HttpConn ============= ``` interface HttpConn extends AsyncIterable<[RequestEvent](deno.requestevent)> { readonly rid: number; close(): void; nextRequest(): Promise<[RequestEvent](deno.requestevent) | null>; } ``` Extends ------- > `AsyncIterable<[RequestEvent](deno.requestevent)>` Properties ---------- > `readonly rid: number` Methods ------- > `close(): void` > `nextRequest(): Promise<[RequestEvent](deno.requestevent) | null>` deno WebAssembly.ImportValue WebAssembly.ImportValue ======================= ``` type ImportValue = [ExportValue](webassembly.exportvalue) | number; ``` Type ---- > `[ExportValue](webassembly.exportvalue) | number` deno GPUQuerySetDescriptor GPUQuerySetDescriptor ===================== ``` interface GPUQuerySetDescriptor extends [GPUObjectDescriptorBase](gpuobjectdescriptorbase) {count: number; pipelineStatistics?: [GPUPipelineStatisticName](gpupipelinestatisticname)[]; type: [GPUQueryType](gpuquerytype); } ``` Extends ------- > `[GPUObjectDescriptorBase](gpuobjectdescriptorbase)` Properties ---------- > `count: number` > `pipelineStatistics?: [GPUPipelineStatisticName](gpupipelinestatisticname)[]` > `type: [GPUQueryType](gpuquerytype)` deno MessageEventInit MessageEventInit ================ ``` interface MessageEventInit <T = any> extends [EventInit](eventinit) {data?: T; lastEventId?: string; origin?: string; } ``` Type Parameters --------------- > `T = any` Extends ------- > `[EventInit](eventinit)` Properties ---------- >
`data?: T` > `lastEventId?: string` > `origin?: string` deno KeyType KeyType ======= ``` type KeyType = "private" | "public" | "secret"; ``` Type ---- > `"private" | "public" | "secret"` deno GPUVertexAttribute GPUVertexAttribute ================== ``` interface GPUVertexAttribute {format: [GPUVertexFormat](gpuvertexformat); offset: number; shaderLocation: number; } ``` Properties ---------- > `format: [GPUVertexFormat](gpuvertexformat)` > `offset: number` > `shaderLocation: number` deno Deno.NetPermissionDescriptor Deno.NetPermissionDescriptor ============================ ``` interface NetPermissionDescriptor {host?: string; name: "net"; } ``` Properties ---------- > `host?: string` Optional host string of the form `"<hostname>[:<port>]"`. Examples: ``` "github.com" "deno.land:8080" ``` > `name: "net"` deno Deno.errors.Interrupted Deno.errors.Interrupted ======================= Raised when the underlying operating system reports an `EINTR` error. In many cases, this underlying IO error will be handled internally within Deno, or result in a `BadResource` error instead. ``` class Interrupted extends Error { } ``` Extends ------- > `Error` deno GPUColorWrite GPUColorWrite ============= ``` class GPUColorWrite { static ALL: 15; static ALPHA: 8; static BLUE: 4; static GREEN: 2; static RED: 1; } ``` Static Properties ----------------- > `ALL: 15` > `ALPHA: 8` > `BLUE: 4` > `GREEN: 2` > `RED: 1` deno WebAssembly.instantiateStreaming WebAssembly.instantiateStreaming ================================ The `WebAssembly.instantiateStreaming()` function compiles and instantiates a WebAssembly module directly from a streamed underlying source. This is the most efficient, optimized way to load wasm code.
[MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WebAssembly/instantiateStreaming) ``` function instantiateStreaming(response: [Response](response) | PromiseLike<[Response](response)>, importObject?: [Imports](webassembly.imports)): Promise<[WebAssemblyInstantiatedSource](webassembly.webassemblyinstantiatedsource)>; ``` > `instantiateStreaming(response: [Response](response) | PromiseLike<[Response](response)>, importObject?: [Imports](webassembly.imports)): Promise<[WebAssemblyInstantiatedSource](webassembly.webassemblyinstantiatedsource)>` ### Parameters > `response: [Response](response) | PromiseLike<[Response](response)>` > `importObject?: [Imports](webassembly.imports) optional` ### Return Type > `Promise<[WebAssemblyInstantiatedSource](webassembly.webassemblyinstantiatedsource)>` deno RsaKeyAlgorithm RsaKeyAlgorithm =============== ``` interface RsaKeyAlgorithm extends [KeyAlgorithm](keyalgorithm) {modulusLength: number; publicExponent: Uint8Array; } ``` Extends ------- > `[KeyAlgorithm](keyalgorithm)` Properties ---------- > `modulusLength: number` > `publicExponent: Uint8Array` deno DOMStringList DOMStringList ============= ``` interface DOMStringList {[index: number]: string; readonly length: number; contains(string: string): boolean; item(index: number): string | null; } ``` Index Signatures ---------------- [index: number]: string Properties ---------- > `readonly length: number` Returns the number of strings in the list. Methods ------- > `contains(string: string): boolean` Returns true if the list contains the given string, and false otherwise. > `item(index: number): string | null` Returns the string at the given index, or null if the index is out of range. deno MessagePort MessagePort =========== The MessagePort interface of the Channel Messaging API represents one of the two ports of a MessageChannel, allowing messages to be sent from one port and listening out for them arriving at the other.
``` class MessagePort extends EventTarget { onmessage: ((this: [MessagePort](messageport), ev: [MessageEvent](messageevent)) => any) | null; onmessageerror: ((this: [MessagePort](messageport), ev: [MessageEvent](messageevent)) => any) | null; addEventListener<K extends keyof [MessagePortEventMap](messageporteventmap)>( type: K, listener: (this: [MessagePort](messageport), ev: [MessagePortEventMap](messageporteventmap)[K]) => any, options?: boolean | [AddEventListenerOptions](addeventlisteneroptions),): void; addEventListener( type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [AddEventListenerOptions](addeventlisteneroptions),): void; close(): void; postMessage(message: any, transfer: [Transferable](transferable)[]): void; postMessage(message: any, options?: [StructuredSerializeOptions](structuredserializeoptions)): void; removeEventListener<K extends keyof [MessagePortEventMap](messageporteventmap)>( type: K, listener: (this: [MessagePort](messageport), ev: [MessagePortEventMap](messageporteventmap)[K]) => any, options?: boolean | [EventListenerOptions](eventlisteneroptions),): void; removeEventListener( type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [EventListenerOptions](eventlisteneroptions),): void; start(): void; } ``` Extends ------- > `EventTarget` Properties ---------- > `onmessage: ((this: [MessagePort](messageport), ev: [MessageEvent](messageevent)) => any) | null` > `onmessageerror: ((this: [MessagePort](messageport), ev: [MessageEvent](messageevent)) => any) | null` Methods ------- > `addEventListener<K extends keyof [MessagePortEventMap](messageporteventmap)>(type: K, listener: (this: [MessagePort](messageport), ev: [MessagePortEventMap](messageporteventmap)[K]) => any, options?: boolean | [AddEventListenerOptions](addeventlisteneroptions)): void` > `addEventListener(type: string, listener: 
[EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [AddEventListenerOptions](addeventlisteneroptions)): void` > `close(): void` Disconnects the port, so that it is no longer active. > `postMessage(message: any, transfer: [Transferable](transferable)[]): void` Posts a message through the channel. Objects listed in transfer are transferred, not just cloned, meaning that they are no longer usable on the sending side. Throws a "DataCloneError" DOMException if transfer contains duplicate objects or port, or if message could not be cloned. > `postMessage(message: any, options?: [StructuredSerializeOptions](structuredserializeoptions)): void` > `removeEventListener<K extends keyof [MessagePortEventMap](messageporteventmap)>(type: K, listener: (this: [MessagePort](messageport), ev: [MessagePortEventMap](messageporteventmap)[K]) => any, options?: boolean | [EventListenerOptions](eventlisteneroptions)): void` > `removeEventListener(type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [EventListenerOptions](eventlisteneroptions)): void` > `start(): void` Begins dispatching messages received on the port. This is implicitly called when assigning a value to `this.onmessage`. deno Deno.inspect Deno.inspect ============ Converts the input into a string that has the same format as printed by `console.log()`. ``` const obj = { a: 10, b: "hello", }; const objAsString = Deno.inspect(obj); // { a: 10, b: "hello" } console.log(obj); // prints same value as objAsString, e.g. 
{ a: 10, b: "hello" } ``` A custom inspect function can be registered on objects, via the symbol `Symbol.for("Deno.customInspect")`, to control and customize the output of `inspect()` or when using `console` logging: ``` class A { x = 10; y = "hello"; [Symbol.for("Deno.customInspect")]() { return `x=${this.x}, y=${this.y}`; } } const inStringFormat = Deno.inspect(new A()); // "x=10, y=hello" console.log(inStringFormat); // prints "x=10, y=hello" ``` A depth can be specified by using the `depth` option: ``` Deno.inspect({a: {b: {c: {d: 'hello'}}}}, {depth: 2}); // { a: { b: [Object] } } ``` ``` function inspect(value: unknown, options?: [InspectOptions](deno.inspectoptions)): string; ``` > `inspect(value: unknown, options?: [InspectOptions](deno.inspectoptions)): string` ### Parameters > `value: unknown` > `options?: [InspectOptions](deno.inspectoptions) optional` ### Return Type > `string` deno EcdsaParams EcdsaParams =========== ``` interface EcdsaParams extends [Algorithm](algorithm) {hash: [HashAlgorithmIdentifier](hashalgorithmidentifier); } ``` Extends ------- > `[Algorithm](algorithm)` Properties ---------- > `hash: [HashAlgorithmIdentifier](hashalgorithmidentifier)`
deno GPUTextureSampleType GPUTextureSampleType ==================== ``` type GPUTextureSampleType = | "float" | "unfilterable-float" | "depth" | "sint" | "uint"; ``` Type ---- > `"float" | "unfilterable-float" | "depth" | "sint" | "uint"` deno Deno.PermissionName Deno.PermissionName =================== The name of a privileged feature which needs permission. ``` type PermissionName = | "run" | "read" | "write" | "net" | "env" | "sys" | "ffi" | "hrtime"; ``` Type ---- > `"run" | "read" | "write" | "net" | "env" | "sys" | "ffi" | "hrtime"` deno EcKeyGenParams EcKeyGenParams ============== ``` interface EcKeyGenParams extends [Algorithm](algorithm) {namedCurve: [NamedCurve](namedcurve); } ``` Extends ------- > `[Algorithm](algorithm)` Properties ---------- > `namedCurve: [NamedCurve](namedcurve)` deno Deno.rename Deno.rename =========== Renames (moves) `oldpath` to `newpath`. Paths may be files or directories. If `newpath` already exists and is not a directory, `rename()` replaces it. OS-specific restrictions may apply when `oldpath` and `newpath` are in different directories. ``` await Deno.rename("old/path", "new/path"); ``` On Unix-like OSes, this operation does not follow symlinks at either path. Whether the operation throws an error, and which error it throws, varies between platforms. It's always an error to rename anything to a non-empty directory. Requires `allow-read` and `allow-write` permissions. 
``` function rename(oldpath: string | [URL](url), newpath: string | [URL](url)): Promise<void>; ``` > `rename(oldpath: string | [URL](url), newpath: string | [URL](url)): Promise<void>` ### Parameters > `oldpath: string | [URL](url)` > `newpath: string | [URL](url)` ### Return Type > `Promise<void>` deno ReadableByteStreamControllerCallback ReadableByteStreamControllerCallback ==================================== ``` interface ReadableByteStreamControllerCallback {(controller: [ReadableByteStreamController](readablebytestreamcontroller)): void | PromiseLike<void>;} ``` Call Signatures --------------- > `(controller: [ReadableByteStreamController](readablebytestreamcontroller)): void | PromiseLike<void>` deno Deno.UnixConn Deno.UnixConn ============= ``` interface UnixConn extends [Conn](deno.conn) {} ``` Extends ------- > `[Conn](deno.conn)` deno GPURenderPassColorAttachment GPURenderPassColorAttachment ============================ ``` interface GPURenderPassColorAttachment {clearValue?: [GPUColor](gpucolor); loadOp: [GPULoadOp](gpuloadop); resolveTarget?: [GPUTextureView](gputextureview); storeOp: [GPUStoreOp](gpustoreop); view: [GPUTextureView](gputextureview); } ``` Properties ---------- > `clearValue?: [GPUColor](gpucolor)` > `loadOp: [GPULoadOp](gpuloadop)` > `resolveTarget?: [GPUTextureView](gputextureview)` > `storeOp: [GPUStoreOp](gpustoreop)` > `view: [GPUTextureView](gputextureview)` deno Deno.readLink Deno.readLink ============= Resolves to the full path destination of the named symbolic link. ``` await Deno.symlink("./test.txt", "./test_link.txt"); const target = await Deno.readLink("./test_link.txt"); // full path of ./test.txt ``` Throws a `TypeError` if called with a hard link. Requires `allow-read` permission. 
``` function readLink(path: string | [URL](url)): Promise<string>; ``` > `readLink(path: string | [URL](url)): Promise<string>` ### Parameters > `path: string | [URL](url)` ### Return Type > `Promise<string>` deno GPUPipelineDescriptorBase GPUPipelineDescriptorBase ========================= ``` interface GPUPipelineDescriptorBase extends [GPUObjectDescriptorBase](gpuobjectdescriptorbase) {layout: [GPUPipelineLayout](gpupipelinelayout) | [GPUAutoLayoutMode](gpuautolayoutmode); } ``` Extends ------- > `[GPUObjectDescriptorBase](gpuobjectdescriptorbase)` Properties ---------- > `layout: [GPUPipelineLayout](gpupipelinelayout) | [GPUAutoLayoutMode](gpuautolayoutmode)` deno WebSocket WebSocket ========= Provides the API for creating and managing a WebSocket connection to a server, as well as for sending and receiving data on the connection. If you are looking to create a WebSocket server, please take a look at `Deno.upgradeWebSocket()`. ``` class WebSocket extends EventTarget { constructor(url: string | [URL](url), protocols?: string | string[]); binaryType: [BinaryType](binarytype); readonly bufferedAmount: number; readonly CLOSED: number; readonly CLOSING: number; readonly CONNECTING: number; readonly extensions: string; onclose: ((this: [WebSocket](websocket), ev: [CloseEvent](closeevent)) => any) | null; onerror: ((this: [WebSocket](websocket), ev: [Event](event) | [ErrorEvent](errorevent)) => any) | null; onmessage: ((this: [WebSocket](websocket), ev: [MessageEvent](messageevent)) => any) | null; onopen: ((this: [WebSocket](websocket), ev: [Event](event)) => any) | null; readonly OPEN: number; readonly protocol: string; readonly readyState: number; readonly url: string; addEventListener<K extends keyof [WebSocketEventMap](websocketeventmap)>( type: K, listener: (this: [WebSocket](websocket), ev: [WebSocketEventMap](websocketeventmap)[K]) => any, options?: boolean | [AddEventListenerOptions](addeventlisteneroptions),): void; addEventListener( type: string, listener: 
[EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [AddEventListenerOptions](addeventlisteneroptions),): void; close(code?: number, reason?: string): void; removeEventListener<K extends keyof [WebSocketEventMap](websocketeventmap)>( type: K, listener: (this: [WebSocket](websocket), ev: [WebSocketEventMap](websocketeventmap)[K]) => any, options?: boolean | [EventListenerOptions](eventlisteneroptions),): void; removeEventListener( type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [EventListenerOptions](eventlisteneroptions),): void; send(data: | string | ArrayBufferLike | [Blob](blob) | ArrayBufferView ): void; static readonly CLOSED: number; static readonly CLOSING: number; static readonly CONNECTING: number; static readonly OPEN: number; } ``` Extends ------- > `EventTarget` Constructors ------------ > `new WebSocket(url: string | [URL](url), protocols?: string | string[])` Properties ---------- > `binaryType: [BinaryType](binarytype)` Returns a string that indicates how binary data from the WebSocket object is exposed to scripts. Can be set to change how binary data is returned. The default is "blob". > `bufferedAmount: number` Returns the number of bytes of application data (UTF-8 text and binary data) that have been queued using send() but not yet been transmitted to the network. If the WebSocket connection is closed, this attribute's value will only increase with each call to the send() method. (The number does not reset to zero once the connection closes.) > `CLOSED: number` > `CLOSING: number` > `CONNECTING: number` > `extensions: string` Returns the extensions selected by the server, if any. 
> `onclose: ((this: [WebSocket](websocket), ev: [CloseEvent](closeevent)) => any) | null` > `onerror: ((this: [WebSocket](websocket), ev: [Event](event) | [ErrorEvent](errorevent)) => any) | null` > `onmessage: ((this: [WebSocket](websocket), ev: [MessageEvent](messageevent)) => any) | null` > `onopen: ((this: [WebSocket](websocket), ev: [Event](event)) => any) | null` > `OPEN: number` > `protocol: string` Returns the subprotocol selected by the server, if any. It can be used in conjunction with the array form of the constructor's second argument to perform subprotocol negotiation. > `readyState: number` Returns the state of the WebSocket object's connection. It can have the values described below. > `url: string` Returns the URL that was used to establish the WebSocket connection. Methods ------- > `addEventListener<K extends keyof [WebSocketEventMap](websocketeventmap)>(type: K, listener: (this: [WebSocket](websocket), ev: [WebSocketEventMap](websocketeventmap)[K]) => any, options?: boolean | [AddEventListenerOptions](addeventlisteneroptions)): void` > `addEventListener(type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [AddEventListenerOptions](addeventlisteneroptions)): void` > `close(code?: number, reason?: string): void` Closes the WebSocket connection, optionally using code as the WebSocket connection close code and reason as the WebSocket connection close reason. 
> `removeEventListener<K extends keyof [WebSocketEventMap](websocketeventmap)>(type: K, listener: (this: [WebSocket](websocket), ev: [WebSocketEventMap](websocketeventmap)[K]) => any, options?: boolean | [EventListenerOptions](eventlisteneroptions)): void` > `removeEventListener(type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [EventListenerOptions](eventlisteneroptions)): void` > `send(data: string | ArrayBufferLike | [Blob](blob) | ArrayBufferView): void` Transmits data using the WebSocket connection. data can be a string, a Blob, an ArrayBuffer, or an ArrayBufferView. Static Properties ----------------- > `CLOSED: number` > `CLOSING: number` > `CONNECTING: number` > `OPEN: number` deno Deno.ftruncate Deno.ftruncate ============== Truncates or extends the specified file stream, to reach the specified `len`. If `len` is not specified then the entire file contents are truncated as if len was set to 0. If the file previously was larger than this new length, the extra data is lost. If the file previously was shorter, it is extended, and the extended part reads as null bytes ('\0'). 
``` // truncate the entire file const file = await Deno.open("my_file.txt", { read: true, write: true, create: true }); await Deno.ftruncate(file.rid); ``` ``` // truncate part of the file const file = await Deno.open("my_file.txt", { read: true, write: true, create: true }); await Deno.write(file.rid, new TextEncoder().encode("Hello World")); await Deno.ftruncate(file.rid, 7); const data = new Uint8Array(32); await Deno.read(file.rid, data); console.log(new TextDecoder().decode(data)); // Hello W ``` ``` function ftruncate(rid: number, len?: number): Promise<void>; ``` > `ftruncate(rid: number, len?: number): Promise<void>` ### Parameters > `rid: number` > `len?: number optional` ### Return Type > `Promise<void>` deno GPUVertexStepMode GPUVertexStepMode ================= ``` type GPUVertexStepMode = "vertex" | "instance"; ``` Type ---- > `"vertex" | "instance"` deno Deno.DirEntry Deno.DirEntry ============= Information about a directory entry returned from [`Deno.readDir`](deno#readDir) and [`Deno.readDirSync`](deno#readDirSync). ``` interface DirEntry {isDirectory: boolean; isFile: boolean; isSymlink: boolean; name: string; } ``` Properties ---------- > `isDirectory: boolean` True if this is info for a regular directory. Mutually exclusive to `DirEntry.isFile` and `DirEntry.isSymlink`. > `isFile: boolean` True if this is info for a regular file. Mutually exclusive to `DirEntry.isDirectory` and `DirEntry.isSymlink`. > `isSymlink: boolean` True if this is info for a symlink. Mutually exclusive to `DirEntry.isFile` and `DirEntry.isDirectory`. > `name: string` The file name of the entry. It is just the entity name and does not include the full path. deno Deno.read Deno.read ========= Read from a resource ID (`rid`) into an array buffer (`buffer`). Resolves to either the number of bytes read during the operation or EOF (`null`) if there was nothing more to read. It is possible for a read to successfully return with `0` bytes. This does not indicate EOF. 
This function is one of the lowest level APIs and most users should not work with this directly, but rather use [`readAll()`](https://deno.land/std/streams/conversion.ts?s=readAll) from [`std/streams/conversion.ts`](https://deno.land/std/streams/conversion.ts) instead. **It is not guaranteed that the full buffer will be read in a single call.** ``` // if "/foo/bar.txt" contains the text "hello world": const file = await Deno.open("/foo/bar.txt"); const buf = new Uint8Array(100); const numberOfBytesRead = await Deno.read(file.rid, buf); // 11 bytes const text = new TextDecoder().decode(buf); // "hello world" Deno.close(file.rid); ``` ``` function read(rid: number, buffer: Uint8Array): Promise<number | null>; ``` > `read(rid: number, buffer: Uint8Array): Promise<number | null>` ### Parameters > `rid: number` > `buffer: Uint8Array` ### Return Type > `Promise<number | null>` deno prompt prompt ====== Shows the given message and waits for the user's input. Returns the user's input as a string. If the default value is given and the user inputs the empty string, then it returns the given default value. If the default value is not given and the user inputs the empty string, it returns null. If stdin is not interactive, it returns null. ``` function prompt(message?: string, defaultValue?: string): string | null; ``` > `prompt(message?: string, defaultValue?: string): string | null` ### Parameters > `message?: string optional` > `defaultValue?: string optional` ### Return Type > `string | null` deno GPUObjectBase GPUObjectBase ============= ``` interface GPUObjectBase {label: string; } ``` Properties ---------- > `label: string` deno Deno.lstatSync Deno.lstatSync ============== Synchronously returns a [`Deno.FileInfo`](deno#FileInfo) for the specified `path`. If `path` is a symlink, information for the symlink will be returned instead of what it points to. 
``` import { assert } from "https://deno.land/std/testing/asserts.ts"; const fileInfo = Deno.lstatSync("hello.txt"); assert(fileInfo.isFile); ``` Requires `allow-read` permission. ``` function lstatSync(path: string | [URL](url)): [FileInfo](deno.fileinfo); ``` > `lstatSync(path: string | [URL](url)): [FileInfo](deno.fileinfo)` ### Parameters > `path: string | [URL](url)` ### Return Type > `[FileInfo](deno.fileinfo)` deno Deno.RunOptions Deno.RunOptions =============== Options which can be used with [`Deno.run`](deno#run). ``` interface RunOptions {cmd: readonly string[] | [string | [URL](url), ...string[]]; cwd?: string; env?: Record<string, string>; stderr?: | "inherit" | "piped" | "null" | number; stdin?: | "inherit" | "piped" | "null" | number; stdout?: | "inherit" | "piped" | "null" | number; } ``` Properties ---------- > `cmd: readonly string[] | [string | [URL](url), ...string[]]` Arguments to pass. *Note*: the first element needs to be a path to the executable that is being run. > `cwd?: string` The current working directory that should be used when running the sub-process. > `env?: Record<string, string>` Any environment variables to be set when running the sub-process. > `stderr?: "inherit" | "piped" | "null" | number` By default the subprocess inherits `stderr` of the parent process. To change this, the option can be set to a resource ID (*rid*) of an open file, `"inherit"`, `"piped"`, or `"null"`: * *number*: the resource ID of an open file/resource. This allows you to write to a file. * `"inherit"`: The default if unspecified. The subprocess inherits from the parent. * `"piped"`: A new pipe should be arranged to connect the parent and child sub-process. * `"null"`: This stream will be ignored. This is the equivalent of attaching the stream to `/dev/null`. > `stdin?: "inherit" | "piped" | "null" | number` By default the subprocess inherits `stdin` of the parent process. 
To change this, the option can be set to a resource ID (*rid*) of an open file, `"inherit"`, `"piped"`, or `"null"`: * *number*: the resource ID of an open file/resource. This allows you to read from a file. * `"inherit"`: The default if unspecified. The subprocess inherits from the parent. * `"piped"`: A new pipe should be arranged to connect the parent and child sub-process. * `"null"`: This stream will be ignored. This is the equivalent of attaching the stream to `/dev/null`. > `stdout?: "inherit" | "piped" | "null" | number` By default the subprocess inherits `stdout` of the parent process. To change this, the option can be set to a resource ID (*rid*) of an open file, `"inherit"`, `"piped"`, or `"null"`: * *number*: the resource ID of an open file/resource. This allows you to write to a file. * `"inherit"`: The default if unspecified. The subprocess inherits from the parent. * `"piped"`: A new pipe should be arranged to connect the parent and child sub-process. * `"null"`: This stream will be ignored. This is the equivalent of attaching the stream to `/dev/null`. deno GPUBufferUsageFlags GPUBufferUsageFlags =================== ``` type GPUBufferUsageFlags = number; ``` Type ---- > `number` deno Deno.ConnectOptions Deno.ConnectOptions =================== ``` interface ConnectOptions {hostname?: string; port: number; transport?: "tcp"; } ``` Properties ---------- > `hostname?: string` A literal IP address or host name that can be resolved to an IP address. If not specified, defaults to `127.0.0.1`. > `port: number` The port to connect to. > `transport?: "tcp"` deno Deno.truncateSync Deno.truncateSync ================= Synchronously truncates (or extends) the specified file, to reach the specified `len`. If `len` is not specified then the entire file contents are truncated. 
### Truncate the entire file ``` Deno.truncateSync("my_file.txt"); ``` ### Truncate part of the file ``` const file = Deno.makeTempFileSync(); Deno.writeFileSync(file, new TextEncoder().encode("Hello World")); Deno.truncateSync(file, 7); const data = Deno.readFileSync(file); console.log(new TextDecoder().decode(data)); ``` Requires `allow-write` permission. ``` function truncateSync(name: string, len?: number): void; ``` > `truncateSync(name: string, len?: number): void` ### Parameters > `name: string` > `len?: number optional` ### Return Type > `void` deno Deno.cwd Deno.cwd ======== Return a string representing the current working directory. If the current directory can be reached via multiple paths (due to symbolic links), `cwd()` may return any one of them. ``` const currentWorkingDirectory = Deno.cwd(); ``` Throws [`Deno.errors.NotFound`](deno#errors_NotFound) if the directory is not available. Requires `allow-read` permission. ``` function cwd(): string; ``` > `cwd(): string` ### Return Type > `string` deno VoidFunction VoidFunction ============ ``` interface VoidFunction {(): void;} ``` Call Signatures --------------- > `(): void` deno Deno.FileInfo Deno.FileInfo ============= Provides information about a file and is returned by [`Deno.stat`](deno#stat), [`Deno.lstat`](deno#lstat), [`Deno.statSync`](deno#statSync), and [`Deno.lstatSync`](deno#lstatSync) or from calling `stat()` and `statSync()` on a [`Deno.FsFile`](deno#FsFile) instance. ``` interface FileInfo {atime: Date | null; birthtime: Date | null; blksize: number | null; blocks: number | null; dev: number | null; gid: number | null; ino: number | null; isDirectory: boolean; isFile: boolean; isSymlink: boolean; mode: number | null; mtime: Date | null; nlink: number | null; rdev: number | null; size: number; uid: number | null; } ``` Properties ---------- > `atime: Date | null` The last access time of the file. This corresponds to the `atime` field from `stat` on Unix and `ftLastAccessTime` on Windows. 
This may not be available on all platforms. > `birthtime: Date | null` The creation time of the file. This corresponds to the `birthtime` field from `stat` on Mac/BSD and `ftCreationTime` on Windows. This may not be available on all platforms. > `blksize: number | null` Blocksize for filesystem I/O. *Linux/Mac OS only.* > `blocks: number | null` Number of blocks allocated to the file, in 512-byte units. *Linux/Mac OS only.* > `dev: number | null` ID of the device containing the file. *Linux/Mac OS only.* > `gid: number | null` Group ID of the owner of this file. *Linux/Mac OS only.* > `ino: number | null` Inode number. *Linux/Mac OS only.* > `isDirectory: boolean` True if this is info for a regular directory. Mutually exclusive to `FileInfo.isFile` and `FileInfo.isSymlink`. > `isFile: boolean` True if this is info for a regular file. Mutually exclusive to `FileInfo.isDirectory` and `FileInfo.isSymlink`. > `isSymlink: boolean` True if this is info for a symlink. Mutually exclusive to `FileInfo.isFile` and `FileInfo.isDirectory`. > `mode: number | null` **UNSTABLE**: Match behavior with Go on Windows for `mode`. The underlying raw `st_mode` bits that contain the standard Unix permissions for this file/directory. > `mtime: Date | null` The last modification time of the file. This corresponds to the `mtime` field from `stat` on Linux/Mac OS and `ftLastWriteTime` on Windows. This may not be available on all platforms. > `nlink: number | null` Number of hard links pointing to this file. *Linux/Mac OS only.* > `rdev: number | null` Device ID of this file. *Linux/Mac OS only.* > `size: number` The size of the file, in bytes. > `uid: number | null` User ID of the owner of this file. *Linux/Mac OS only.*
deno CompressionStream CompressionStream ================= An API for compressing a stream of data. For example: ``` await Deno.stdin.readable .pipeThrough(new CompressionStream("gzip")) .pipeTo(Deno.stdout.writable); ``` ``` class CompressionStream { constructor(format: string); readonly readable: [ReadableStream](readablestream)<Uint8Array>; readonly writable: [WritableStream](writablestream)<Uint8Array>; } ``` Constructors ------------ > `new CompressionStream(format: string)` Creates a new `CompressionStream` object which compresses a stream of data. Throws a `TypeError` if the format passed to the constructor is not supported. Properties ---------- > `readable: [ReadableStream](readablestream)<Uint8Array>` > `writable: [WritableStream](writablestream)<Uint8Array>` deno Deno.UpgradeWebSocketOptions Deno.UpgradeWebSocketOptions ============================ ``` interface UpgradeWebSocketOptions {idleTimeout?: number; protocol?: string; } ``` Properties ---------- > `idleTimeout?: number` If the client does not respond to the server's ping frame with a `pong` within the specified timeout, the connection is deemed unhealthy and is closed. The `close` and `error` events will be emitted. The default is 120 seconds. Set to 0 to disable timeouts. 
> `protocol?: string` deno TransformStream TransformStream =============== ``` interface TransformStream <I = any, O = any> { readonly readable: [ReadableStream](readablestream)<O>; readonly writable: [WritableStream](writablestream)<I>; } ``` ``` var TransformStream: {prototype: [TransformStream](transformstream); new <I = any, O = any>( transformer?: [Transformer](transformer)<I, O>, writableStrategy?: [QueuingStrategy](queuingstrategy)<I>, readableStrategy?: [QueuingStrategy](queuingstrategy)<O>,): [TransformStream](transformstream)<I, O>; }; ``` Type Parameters --------------- > `I = any` > `O = any` Properties ---------- > `readonly readable: [ReadableStream](readablestream)<O>` > `readonly writable: [WritableStream](writablestream)<I>` deno WebAssembly.compile WebAssembly.compile =================== The `WebAssembly.compile()` function compiles WebAssembly binary code into a `WebAssembly.Module` object. This function is useful if it is necessary to compile a module before it can be instantiated (otherwise, the `WebAssembly.instantiate()` function should be used). [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WebAssembly/compile) ``` function compile(bytes: [BufferSource](buffersource)): Promise<[Module](webassembly.module)>; ``` > `compile(bytes: [BufferSource](buffersource)): Promise<[Module](webassembly.module)>` ### Parameters > `bytes: [BufferSource](buffersource)` ### Return Type > `Promise<[Module](webassembly.module)>` deno Deno.TestContext Deno.TestContext ================ Context that is passed to a testing function, which can be used to either gain information about the current test, or register additional test steps within the current test. 
``` interface TestContext {name: string; origin: string; parent?: [TestContext](deno.testcontext); step(definition: [TestStepDefinition](deno.teststepdefinition)): Promise<boolean>; step(name: string, fn: (t: [TestContext](deno.testcontext)) => void | Promise<void>): Promise<boolean>; } ``` Properties ---------- > `name: string` The current test name. > `origin: string` The string URL of the current test. > `parent?: [TestContext](deno.testcontext)` If the current test is a step of another test, the parent test context will be set here. Methods ------- > `step(definition: [TestStepDefinition](deno.teststepdefinition)): Promise<boolean>` Run a sub step of the parent test or step. Returns a promise that resolves to a boolean signifying if the step completed successfully. The returned promise never rejects unless the arguments are invalid. If the test was ignored the promise returns `false`. ``` Deno.test({ name: "a parent test", async fn(t) { console.log("before the step"); await t.step({ name: "step 1", fn(t) { console.log("current step:", t.name); } }); console.log("after the step"); } }); ``` > `step(name: string, fn: (t: [TestContext](deno.testcontext)) => void | Promise<void>): Promise<boolean>` Run a sub step of the parent test or step. Returns a promise that resolves to a boolean signifying if the step completed successfully. The returned promise never rejects unless the arguments are invalid. If the test was ignored the promise returns `false`. 
``` Deno.test( "a parent test", async (t) => { console.log("before the step"); await t.step( "step 1", (t) => { console.log("current step:", t.name); } ); console.log("after the step"); } ); ``` deno Deno.PermissionStatusEventMap Deno.PermissionStatusEventMap ============================= ``` interface PermissionStatusEventMap {change: [Event](event); } ``` Properties ---------- > `change: [Event](event)` deno Deno.errors.ConnectionRefused Deno.errors.ConnectionRefused ============================= Raised when the underlying operating system reports that a connection to a resource is refused. ``` class ConnectionRefused extends Error { } ``` Extends ------- > `Error` deno ReadableStreamDefaultControllerCallback ReadableStreamDefaultControllerCallback ======================================= ``` interface ReadableStreamDefaultControllerCallback <R> {(controller: [ReadableStreamDefaultController](readablestreamdefaultcontroller)<R>): void | PromiseLike<void>;} ``` Type Parameters --------------- > `R` Call Signatures --------------- > `(controller: [ReadableStreamDefaultController](readablestreamdefaultcontroller)<R>): void | PromiseLike<void>` deno Blob Blob ==== A file-like object of immutable, raw data. Blobs represent data that isn't necessarily in a JavaScript-native format. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. 
``` class Blob { constructor(blobParts?: [BlobPart](blobpart)[], options?: [BlobPropertyBag](blobpropertybag)); readonly size: number; readonly type: string; arrayBuffer(): Promise<ArrayBuffer>; slice( start?: number, end?: number, contentType?: string,): [Blob](blob); stream(): [ReadableStream](readablestream)<Uint8Array>; text(): Promise<string>; } ``` Constructors ------------ > `new Blob(blobParts?: [BlobPart](blobpart)[], options?: [BlobPropertyBag](blobpropertybag))` Properties ---------- > `size: number` > `type: string` Methods ------- > `arrayBuffer(): Promise<ArrayBuffer>` > `slice(start?: number, end?: number, contentType?: string): [Blob](blob)` > `stream(): [ReadableStream](readablestream)<Uint8Array>` > `text(): Promise<string>` deno GPUImageCopyBuffer GPUImageCopyBuffer ================== ``` interface GPUImageCopyBuffer extends [GPUImageDataLayout](gpuimagedatalayout) {buffer: [GPUBuffer](gpubuffer); } ``` Extends ------- > `[GPUImageDataLayout](gpuimagedatalayout)` Properties ---------- > `buffer: [GPUBuffer](gpubuffer)` deno Deno.ListenTlsOptions Deno.ListenTlsOptions ===================== ``` interface ListenTlsOptions extends [ListenOptions](deno.listenoptions) {cert?: string; certFile?: string; key?: string; keyFile?: string; transport?: "tcp"; } ``` Extends ------- > `[ListenOptions](deno.listenoptions)` Properties ---------- > `cert?: string` Cert chain in PEM format > `certFile?: string` Path to a file containing a PEM formatted CA certificate. Requires `--allow-read`. > `key?: string` Server private key in PEM format > `keyFile?: string` Server private key file. Requires `--allow-read`. 
> `transport?: "tcp"` deno GPURenderBundleEncoderDescriptor GPURenderBundleEncoderDescriptor ================================ ``` interface GPURenderBundleEncoderDescriptor extends [GPURenderPassLayout](gpurenderpasslayout) {depthReadOnly?: boolean; stencilReadOnly?: boolean; } ``` Extends ------- > `[GPURenderPassLayout](gpurenderpasslayout)` Properties ---------- > `depthReadOnly?: boolean` > `stencilReadOnly?: boolean` deno GPUExtent3D GPUExtent3D =========== ``` type GPUExtent3D = number[] | [GPUExtent3DDict](gpuextent3ddict); ``` Type ---- > `number[] | [GPUExtent3DDict](gpuextent3ddict)` deno URLPattern URLPattern ========== The URLPattern API provides a web platform primitive for matching URLs based on a convenient pattern syntax. The syntax is based on path-to-regexp. Wildcards, named capture groups, regular groups, and group modifiers are all supported. ``` // Specify the pattern as structured data. const pattern = new URLPattern({ pathname: "/users/:user" }); const match = pattern.exec("/users/joe"); console.log(match.pathname.groups.user); // joe ``` ``` // Specify a fully qualified string pattern. const pattern = new URLPattern("https://example.com/books/:id"); console.log(pattern.test("https://example.com/books/123")); // true console.log(pattern.test("https://deno.land/books/123")); // false ``` ``` // Specify a relative string pattern with a base URL. 
const pattern = new URLPattern("/:article", "https://blog.example.com"); console.log(pattern.test("https://blog.example.com/article")); // true console.log(pattern.test("https://blog.example.com/article/123")); // false ``` ``` class URLPattern { constructor(input: [URLPatternInput](urlpatterninput), baseURL?: string); readonly hash: string; readonly hostname: string; readonly password: string; readonly pathname: string; readonly port: string; readonly protocol: string; readonly search: string; readonly username: string; exec(input: [URLPatternInput](urlpatterninput), baseURL?: string): [URLPatternResult](urlpatternresult) | null; test(input: [URLPatternInput](urlpatterninput), baseURL?: string): boolean; } ``` Constructors ------------ > `new URLPattern(input: [URLPatternInput](urlpatterninput), baseURL?: string)` Properties ---------- > `hash: string` The pattern string for the `hash`. > `hostname: string` The pattern string for the `hostname`. > `password: string` The pattern string for the `password`. > `pathname: string` The pattern string for the `pathname`. > `port: string` The pattern string for the `port`. > `protocol: string` The pattern string for the `protocol`. > `search: string` The pattern string for the `search`. > `username: string` The pattern string for the `username`. Methods ------- > `exec(input: [URLPatternInput](urlpatterninput), baseURL?: string): [URLPatternResult](urlpatternresult) | null` Match the given input against the stored pattern. The input can either be provided as a url string (with an optional base), or as individual components in the form of an object. ``` const pattern = new URLPattern("https://example.com/books/:id"); // Match a url string. let match = pattern.exec("https://example.com/books/123"); console.log(match.pathname.groups.id); // 123 // Match a relative url with a base. match = pattern.exec("/books/123", "https://example.com"); console.log(match.pathname.groups.id); // 123 // Match an object of url components. 
match = pattern.exec({ pathname: "/books/123" }); console.log(match.pathname.groups.id); // 123 ``` > `test(input: [URLPatternInput](urlpatterninput), baseURL?: string): boolean` Test if the given input matches the stored pattern. The input can either be provided as a url string (with an optional base), or as individual components in the form of an object. ``` const pattern = new URLPattern("https://example.com/books/:id"); // Test a url string. console.log(pattern.test("https://example.com/books/123")); // true // Test a relative url with a base. console.log(pattern.test("/books/123", "https://example.com")); // true // Test an object of url components. console.log(pattern.test({ pathname: "/books/123" })); // true ``` deno Deno.stdout Deno.stdout =========== A reference to `stdout` which can be used to write directly to `stdout`. It implements the Deno specific [`Writer`](deno.writer), [`WriterSync`](deno.writersync), and [`Closer`](deno.closer) interfaces as well as provides a [`WritableStream`](writablestream) interface. These are low level constructs, and the [`console`](console) interface is a more straightforward way to interact with `stdout` and `stderr`. ``` const stdout: & [Writer](deno.writer) & [WriterSync](deno.writersync) & [Closer](deno.closer) & {readonly rid: number; readonly writable: [WritableStream](writablestream)<Uint8Array>; }; ``` deno Deno.removeSync Deno.removeSync =============== Synchronously removes the named file or directory. ``` Deno.removeSync("/path/to/empty_dir/or/file"); Deno.removeSync("/path/to/populated_dir/or/file", { recursive: true }); ``` Throws error if permission denied, path not found, or path is a non-empty directory and the `recursive` option isn't set to `true`. Requires `allow-write` permission. 
``` function removeSync(path: string | [URL](url), options?: [RemoveOptions](deno.removeoptions)): void; ``` > `removeSync(path: string | [URL](url), options?: [RemoveOptions](deno.removeoptions)): void` ### Parameters > `path: string | [URL](url)` > `options?: [RemoveOptions](deno.removeoptions) optional` ### Return Type > `void` deno RequestCredentials RequestCredentials ================== ``` type RequestCredentials = "include" | "omit" | "same-origin"; ``` Type ---- > `"include" | "omit" | "same-origin"` deno Location Location ======== The location (URL) of the object it is linked to. Changes done on it are reflected on the object it relates to. Accessible via `globalThis.location`. ``` class Location { constructor(); readonly ancestorOrigins: [DOMStringList](domstringlist); hash: string; host: string; hostname: string; href: string; readonly origin: string; pathname: string; port: string; protocol: string; search: string; assign(url: string): void; reload(): void; reload(forcedReload: boolean): void; replace(url: string): void; toString(): string; } ``` Constructors ------------ > `new Location()` Properties ---------- > `ancestorOrigins: [DOMStringList](domstringlist)` Returns a DOMStringList object listing the origins of the ancestor browsing contexts, from the parent browsing context to the top-level browsing context. Always empty in Deno. > `hash: string` Returns the Location object's URL's fragment (includes leading "#" if non-empty). Cannot be set in Deno. > `host: string` Returns the Location object's URL's host and port (if different from the default port for the scheme). Cannot be set in Deno. > `hostname: string` Returns the Location object's URL's host. Cannot be set in Deno. > `href: string` Returns the Location object's URL. Cannot be set in Deno. > `origin: string` Returns the Location object's URL's origin. > `pathname: string` Returns the Location object's URL's path. Cannot be set in Deno. 
> `port: string` Returns the Location object's URL's port. Cannot be set in Deno. > `protocol: string` Returns the Location object's URL's scheme. Cannot be set in Deno. > `search: string` Returns the Location object's URL's query (includes leading "?" if non-empty). Cannot be set in Deno. Methods ------- > `assign(url: string): void` Navigates to the given URL. Disabled in Deno. > `reload(): void` Reloads the current page. Disabled in Deno. > `reload(forcedReload: boolean): void deprecated` @deprecated > `replace(url: string): void` Removes the current page from the session history and navigates to the given URL. Disabled in Deno. > `toString(): string` deno GPUCompilationInfo GPUCompilationInfo ================== ``` interface GPUCompilationInfo { readonly messages: ReadonlyArray<[GPUCompilationMessage](gpucompilationmessage)>; } ``` Properties ---------- > `readonly messages: ReadonlyArray<[GPUCompilationMessage](gpucompilationmessage)>` deno UnderlyingByteSource UnderlyingByteSource ==================== ``` interface UnderlyingByteSource {autoAllocateChunkSize?: number; cancel?: [ReadableStreamErrorCallback](readablestreamerrorcallback); pull?: [ReadableByteStreamControllerCallback](readablebytestreamcontrollercallback); start?: [ReadableByteStreamControllerCallback](readablebytestreamcontrollercallback); type: "bytes"; } ``` Properties ---------- > `autoAllocateChunkSize?: number` > `cancel?: [ReadableStreamErrorCallback](readablestreamerrorcallback)` > `pull?: [ReadableByteStreamControllerCallback](readablebytestreamcontrollercallback)` > `start?: [ReadableByteStreamControllerCallback](readablebytestreamcontrollercallback)` > `type: "bytes"` deno Deno.openSync Deno.openSync ============= Synchronously open a file and return an instance of [`Deno.FsFile`](deno#FsFile). The file does not need to previously exist if using the `create` or `createNew` open options. It is the caller's responsibility to close the file when finished with it. 
``` const file = Deno.openSync("/foo/bar.txt", { read: true, write: true }); // Do work with file Deno.close(file.rid); ``` Requires `allow-read` and/or `allow-write` permissions depending on options. ``` function openSync(path: string | [URL](url), options?: [OpenOptions](deno.openoptions)): [FsFile](deno.fsfile); ``` > `openSync(path: string | [URL](url), options?: [OpenOptions](deno.openoptions)): [FsFile](deno.fsfile)` ### Parameters > `path: string | [URL](url)` > `options?: [OpenOptions](deno.openoptions) optional` ### Return Type > `[FsFile](deno.fsfile)` deno HmacKeyGenParams HmacKeyGenParams ================ ``` interface HmacKeyGenParams extends [Algorithm](algorithm) {hash: [HashAlgorithmIdentifier](hashalgorithmidentifier); length?: number; } ``` Extends ------- > `[Algorithm](algorithm)` Properties ---------- > `hash: [HashAlgorithmIdentifier](hashalgorithmidentifier)` > `length?: number` deno GPUBuffer GPUBuffer ========= ``` class GPUBuffer implements [GPUObjectBase](gpuobjectbase) { label: string; destroy(): undefined; getMappedRange(offset?: number, size?: number): ArrayBuffer; mapAsync( mode: [GPUMapModeFlags](gpumapmodeflags), offset?: number, size?: number,): Promise<undefined>; unmap(): undefined; } ``` Implements ---------- > `[GPUObjectBase](gpuobjectbase)` Properties ---------- > `label: string` Methods ------- > `destroy(): undefined` > `getMappedRange(offset?: number, size?: number): ArrayBuffer` > `mapAsync(mode: [GPUMapModeFlags](gpumapmodeflags), offset?: number, size?: number): Promise<undefined>` > `unmap(): undefined` deno AesCtrParams AesCtrParams ============ ``` interface AesCtrParams extends [Algorithm](algorithm) {counter: [BufferSource](buffersource); length: number; } ``` Extends ------- > `[Algorithm](algorithm)` Properties ---------- > `counter: [BufferSource](buffersource)` > `length: number` deno Deno.PermissionOptionsObject Deno.PermissionOptionsObject ============================ A set of options which can define the 
permissions within a test or worker context at a highly specific level. ``` interface PermissionOptionsObject {env?: "inherit" | boolean | string[]; ffi?: "inherit" | boolean | Array<string | [URL](url)>; hrtime?: "inherit" | boolean; net?: "inherit" | boolean | string[]; read?: "inherit" | boolean | Array<string | [URL](url)>; run?: "inherit" | boolean | Array<string | [URL](url)>; sys?: "inherit" | boolean | string[]; write?: "inherit" | boolean | Array<string | [URL](url)>; } ``` Properties ---------- > `env?: "inherit" | boolean | string[]` Specifies if the `env` permission should be requested or revoked. If set to `"inherit"`, the current `env` permission will be inherited. If set to `true`, the global `env` permission will be requested. If set to `false`, the global `env` permission will be revoked. Defaults to `false`. > `ffi?: "inherit" | boolean | Array<string | [URL](url)>` Specifies if the `ffi` permission should be requested or revoked. If set to `"inherit"`, the current `ffi` permission will be inherited. If set to `true`, the global `ffi` permission will be requested. If set to `false`, the global `ffi` permission will be revoked. Defaults to `false`. > `hrtime?: "inherit" | boolean` Specifies if the `hrtime` permission should be requested or revoked. If set to `"inherit"`, the current `hrtime` permission will be inherited. If set to `true`, the global `hrtime` permission will be requested. If set to `false`, the global `hrtime` permission will be revoked. Defaults to `false`. > `net?: "inherit" | boolean | string[]` Specifies if the `net` permission should be requested or revoked. If set to `"inherit"`, the current `net` permission will be inherited. If set to `true`, the global `net` permission will be requested. If set to `false`, the global `net` permission will be revoked. If set to `string[]`, the `net` permission will be requested with the specified host strings with the format `"<host>[:<port>]"`. Defaults to `false`. 
Examples: ``` import { assertEquals } from "https://deno.land/std/testing/asserts.ts"; Deno.test({ name: "inherit", permissions: { net: "inherit", }, async fn() { const status = await Deno.permissions.query({ name: "net" }) assertEquals(status.state, "granted"); }, }); ``` ``` import { assertEquals } from "https://deno.land/std/testing/asserts.ts"; Deno.test({ name: "true", permissions: { net: true, }, async fn() { const status = await Deno.permissions.query({ name: "net" }); assertEquals(status.state, "granted"); }, }); ``` ``` import { assertEquals } from "https://deno.land/std/testing/asserts.ts"; Deno.test({ name: "false", permissions: { net: false, }, async fn() { const status = await Deno.permissions.query({ name: "net" }); assertEquals(status.state, "denied"); }, }); ``` ``` import { assertEquals } from "https://deno.land/std/testing/asserts.ts"; Deno.test({ name: "localhost:8080", permissions: { net: ["localhost:8080"], }, async fn() { const status = await Deno.permissions.query({ name: "net", host: "localhost:8080" }); assertEquals(status.state, "granted"); }, }); ``` > `read?: "inherit" | boolean | Array<string | [URL](url)>` Specifies if the `read` permission should be requested or revoked. If set to `"inherit"`, the current `read` permission will be inherited. If set to `true`, the global `read` permission will be requested. If set to `false`, the global `read` permission will be revoked. If set to `Array<string | URL>`, the `read` permission will be requested with the specified file paths. Defaults to `false`. > `run?: "inherit" | boolean | Array<string | [URL](url)>` Specifies if the `run` permission should be requested or revoked. If set to `"inherit"`, the current `run` permission will be inherited. If set to `true`, the global `run` permission will be requested. If set to `false`, the global `run` permission will be revoked. Defaults to `false`. 
> `sys?: "inherit" | boolean | string[]` Specifies if the `sys` permission should be requested or revoked. If set to `"inherit"`, the current `sys` permission will be inherited. If set to `true`, the global `sys` permission will be requested. If set to `false`, the global `sys` permission will be revoked. Defaults to `false`. > `write?: "inherit" | boolean | Array<string | [URL](url)>` Specifies if the `write` permission should be requested or revoked. If set to `"inherit"`, the current `write` permission will be inherited. If set to `true`, the global `write` permission will be requested. If set to `false`, the global `write` permission will be revoked. If set to `Array<string | URL>`, the `write` permission will be requested with the specified file paths. Defaults to `false`.
deno Deno.memoryUsage Deno.memoryUsage ================ Returns an object describing the memory usage of the Deno process and the V8 subsystem measured in bytes. ``` function memoryUsage(): [MemoryUsage](deno.memoryusage); ``` > `memoryUsage(): [MemoryUsage](deno.memoryusage)` ### Return Type > `[MemoryUsage](deno.memoryusage)` deno ReadableStreamErrorCallback ReadableStreamErrorCallback =========================== ``` interface ReadableStreamErrorCallback {(reason: any): void | PromiseLike<void>;} ``` Call Signatures --------------- > `(reason: any): void | PromiseLike<void>` deno Deno.readDir Deno.readDir ============ Reads the directory given by `path` and returns an async iterable of [`Deno.DirEntry`](deno#DirEntry). ``` for await (const dirEntry of Deno.readDir("/")) { console.log(dirEntry.name); } ``` Throws error if `path` is not a directory. Requires `allow-read` permission. ``` function readDir(path: string | [URL](url)): AsyncIterable<[DirEntry](deno.direntry)>; ``` > `readDir(path: string | [URL](url)): AsyncIterable<[DirEntry](deno.direntry)>` ### Parameters > `path: string | [URL](url)` ### Return Type > `AsyncIterable<[DirEntry](deno.direntry)>` deno GPUStencilFaceState GPUStencilFaceState =================== ``` interface GPUStencilFaceState {compare?: [GPUCompareFunction](gpucomparefunction); depthFailOp?: [GPUStencilOperation](gpustenciloperation); failOp?: [GPUStencilOperation](gpustenciloperation); passOp?: [GPUStencilOperation](gpustenciloperation); } ``` Properties ---------- > `compare?: [GPUCompareFunction](gpucomparefunction)` > `depthFailOp?: [GPUStencilOperation](gpustenciloperation)` > `failOp?: [GPUStencilOperation](gpustenciloperation)` > `passOp?: [GPUStencilOperation](gpustenciloperation)` deno EventListener EventListener ============= ``` interface EventListener {(evt: [Event](event)): void | Promise<void>;} ``` Call Signatures --------------- > `(evt: [Event](event)): void | Promise<void>` deno GPUAutoLayoutMode GPUAutoLayoutMode 
================= ``` type GPUAutoLayoutMode = "auto"; ``` Type ---- > `"auto"` deno ProgressEvent ProgressEvent ============= Events measuring progress of an underlying process, like an HTTP request (for an XMLHttpRequest, or the loading of the underlying resource of an `<img>`, `<audio>`, `<video>`, `<style>`, or `<link>`). ``` class ProgressEvent<T extends [EventTarget](eventtarget) = [EventTarget](eventtarget)> extends Event { constructor(type: string, eventInitDict?: [ProgressEventInit](progresseventinit)); readonly lengthComputable: boolean; readonly loaded: number; readonly target: T | null; readonly total: number; } ``` Type Parameters --------------- > `T extends [EventTarget](eventtarget) = [EventTarget](eventtarget)` Extends ------- > `Event` Constructors ------------ > `new ProgressEvent(type: string, eventInitDict?: [ProgressEventInit](progresseventinit))` Properties ---------- > `lengthComputable: boolean` > `loaded: number` > `target: T | null` > `total: number` deno EventListenerOrEventListenerObject EventListenerOrEventListenerObject ================================== ``` type EventListenerOrEventListenerObject = [EventListener](eventlistener) | [EventListenerObject](eventlistenerobject); ``` Type ---- > `[EventListener](eventlistener) | [EventListenerObject](eventlistenerobject)` deno Deno.Closer Deno.Closer =========== An abstract interface which when implemented provides an interface to close files/resources that were previously opened. ``` interface Closer {close(): void; } ``` Methods ------- > `close(): void` Closes the resource, "freeing" the backing file/resource. deno Deno.startTls Deno.startTls ============= Start TLS handshake from an existing connection using an optional list of CA certificates, and hostname (default is "127.0.0.1"). Specifying CA certs is optional. By default the configured root certificates are used. Using this function requires that the other end of the connection is prepared for a TLS handshake. 
``` const conn = await Deno.connect({ port: 80, hostname: "127.0.0.1" }); const caCert = await Deno.readTextFile("./certs/my_custom_root_CA.pem"); const tlsConn = await Deno.startTls(conn, { caCerts: [caCert], hostname: "localhost" }); ``` Requires `allow-net` permission. ``` function startTls(conn: [Conn](deno.conn), options?: [StartTlsOptions](deno.starttlsoptions)): Promise<[TlsConn](deno.tlsconn)>; ``` > `startTls(conn: [Conn](deno.conn), options?: [StartTlsOptions](deno.starttlsoptions)): Promise<[TlsConn](deno.tlsconn)>` ### Parameters > `conn: [Conn](deno.conn)` > `options?: [StartTlsOptions](deno.starttlsoptions) optional` ### Return Type > `Promise<[TlsConn](deno.tlsconn)>` deno Deno.fsyncSync Deno.fsyncSync ============== Synchronously flushes any pending data and metadata operations of the given file stream to disk. ``` const file = Deno.openSync( "my_file.txt", { read: true, write: true, create: true }, ); Deno.writeSync(file.rid, new TextEncoder().encode("Hello World")); Deno.ftruncateSync(file.rid, 1); Deno.fsyncSync(file.rid); console.log(new TextDecoder().decode(Deno.readFileSync("my_file.txt"))); // H ``` ``` function fsyncSync(rid: number): void; ``` > `fsyncSync(rid: number): void` ### Parameters > `rid: number` ### Return Type > `void` deno GPUValidationError GPUValidationError ================== ``` class GPUValidationError extends GPUError { constructor(message: string);} ``` Extends ------- > `GPUError` Constructors ------------ > `new GPUValidationError(message: string)` deno TextDecodeOptions TextDecodeOptions ================= ``` interface TextDecodeOptions {stream?: boolean; } ``` Properties ---------- > `stream?: boolean` deno Deno.stderr Deno.stderr =========== A reference to `stderr` which can be used to write directly to `stderr`. It implements the Deno specific [`Writer`](deno.writer), [`WriterSync`](deno.writersync), and [`Closer`](deno.closer) interfaces as well as provides a [`WritableStream`](writablestream) interface. 
These are low level constructs, and the [`console`](console) interface is a more straightforward way to interact with `stdout` and `stderr`. ``` const stderr: & [Writer](deno.writer) & [WriterSync](deno.writersync) & [Closer](deno.closer) & {readonly rid: number; readonly writable: [WritableStream](writablestream)<Uint8Array>; }; ``` deno Deno.Seeker Deno.Seeker =========== An abstract interface which when implemented provides an interface to seek within an open file/resource asynchronously. ``` interface Seeker {seek(offset: number, whence: [SeekMode](deno.seekmode)): Promise<number>; } ``` Methods ------- > `seek(offset: number, whence: [SeekMode](deno.seekmode)): Promise<number>` Seek sets the offset for the next `read()` or `write()` to offset, interpreted according to `whence`: `Start` means relative to the start of the file, `Current` means relative to the current offset, and `End` means relative to the end. Seek resolves to the new offset relative to the start of the file. Seeking to an offset before the start of the file is an error. Seeking to any positive offset is legal, but the behavior of subsequent I/O operations on the underlying object is implementation-dependent. It resolves with the updated offset. deno RequestCache RequestCache ============ ``` type RequestCache = | "default" | "force-cache" | "no-cache" | "no-store" | "only-if-cached" | "reload"; ``` Type ---- > `"default" | "force-cache" | "no-cache" | "no-store" | "only-if-cached" | "reload"` deno GPUCommandEncoderDescriptor GPUCommandEncoderDescriptor =========================== ``` interface GPUCommandEncoderDescriptor extends [GPUObjectDescriptorBase](gpuobjectdescriptorbase) {} ``` Extends ------- > `[GPUObjectDescriptorBase](gpuobjectdescriptorbase)` deno EventTarget EventTarget =========== EventTarget is a DOM interface implemented by objects that can receive events and may have listeners for them. 
``` class EventTarget { addEventListener( type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject) | null, options?: boolean | [AddEventListenerOptions](addeventlisteneroptions),): void; dispatchEvent(event: [Event](event)): boolean; removeEventListener( type: string, callback: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject) | null, options?: [EventListenerOptions](eventlisteneroptions) | boolean,): void; } ``` Methods ------- > `addEventListener(type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject) | null, options?: boolean | [AddEventListenerOptions](addeventlisteneroptions)): void` Appends an event listener for events whose type attribute value is type. The callback argument sets the callback that will be invoked when the event is dispatched. The options argument sets listener-specific options. For compatibility this can be a boolean, in which case the method behaves exactly as if the value was specified as options's capture. When set to true, options's capture prevents callback from being invoked when the event's eventPhase attribute value is BUBBLING_PHASE. When false (or not present), callback will not be invoked when event's eventPhase attribute value is CAPTURING_PHASE. Either way, callback will be invoked if event's eventPhase attribute value is AT_TARGET. When set to true, options's passive indicates that the callback will not cancel the event by invoking preventDefault(). This is used to enable performance optimizations described in § 2.8 Observing event listeners. When set to true, options's once indicates that the callback will only be invoked once after which the event listener will be removed. The event listener is appended to target's event listener list and is not appended if it has the same type, callback, and capture. 
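The behavior of the `once` option can be sketched directly with the standard `EventTarget` and `Event` globals (no Deno-specific APIs involved):

```typescript
// A listener registered with `once: true` is removed after its first
// invocation; a plain listener keeps firing. For each dispatch,
// listeners run in registration order.
const target = new EventTarget();
const seen: string[] = [];

target.addEventListener("ping", () => seen.push("always"));
target.addEventListener("ping", () => seen.push("once"), { once: true });

target.dispatchEvent(new Event("ping"));
target.dispatchEvent(new Event("ping"));

console.log(seen); // ["always", "once", "always"]
```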
> `dispatchEvent(event: [Event](event)): boolean` Dispatches a synthetic event event to target and returns true if either event's cancelable attribute value is false or its preventDefault() method was not invoked, and false otherwise. > `removeEventListener(type: string, callback: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject) | null, options?: [EventListenerOptions](eventlisteneroptions) | boolean): void` Removes the event listener in target's event listener list with the same type, callback, and options. deno Deno.Conn Deno.Conn ========= ``` interface Conn extends [Reader](deno.reader), [Writer](deno.writer), [Closer](deno.closer) { readonly localAddr: [Addr](deno.addr); readonly readable: [ReadableStream](readablestream)<Uint8Array>; readonly remoteAddr: [Addr](deno.addr); readonly rid: number; readonly writable: [WritableStream](writablestream)<Uint8Array>; closeWrite(): Promise<void>; } ``` Extends ------- > `[Reader](deno.reader)` > `[Writer](deno.writer)` > `[Closer](deno.closer)` Properties ---------- > `readonly localAddr: [Addr](deno.addr)` The local address of the connection. > `readonly readable: [ReadableStream](readablestream)<Uint8Array>` > `readonly remoteAddr: [Addr](deno.addr)` The remote address of the connection. > `readonly rid: number` The resource ID of the connection. > `readonly writable: [WritableStream](writablestream)<Uint8Array>` Methods ------- > `closeWrite(): Promise<void>` Shuts down (`shutdown(2)`) the write side of the connection. Most callers should just use `close()`. 
deno AesDerivedKeyParams AesDerivedKeyParams =================== ``` interface AesDerivedKeyParams extends [Algorithm](algorithm) {length: number; } ``` Extends ------- > `[Algorithm](algorithm)` Properties ---------- > `length: number` deno WebAssembly.compileStreaming WebAssembly.compileStreaming ============================ The `WebAssembly.compileStreaming()` function compiles a `WebAssembly.Module` directly from a streamed underlying source. This function is useful if it is necessary to compile a module before it can be instantiated (otherwise, the `WebAssembly.instantiateStreaming()` function should be used). [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WebAssembly/compileStreaming) ``` function compileStreaming(source: [Response](response) | Promise<[Response](response)>): Promise<[Module](webassembly.module)>; ``` > `compileStreaming(source: [Response](response) | Promise<[Response](response)>): Promise<[Module](webassembly.module)>` ### Parameters > `source: [Response](response) | Promise<[Response](response)>` ### Return Type > `Promise<[Module](webassembly.module)>` deno AlgorithmIdentifier AlgorithmIdentifier =================== ``` type AlgorithmIdentifier = string | [Algorithm](algorithm); ``` Type ---- > `string | [Algorithm](algorithm)` deno Deno.errors.NotFound Deno.errors.NotFound ==================== Raised when the underlying operating system indicates that the file was not found. ``` class NotFound extends Error { } ``` Extends ------- > `Error` deno Deno.chdir Deno.chdir ========== Change the current working directory to the specified path. ``` Deno.chdir("/home/userA"); Deno.chdir("../userB"); Deno.chdir("C:\\Program Files (x86)\\Java"); ``` Throws [`Deno.errors.NotFound`](deno#errors_NotFound) if directory not found. Throws [`Deno.errors.PermissionDenied`](deno#errors_PermissionDenied) if the user does not have operating system file access rights. Requires `allow-read` permission. 
``` function chdir(directory: string | [URL](url)): void; ``` > `chdir(directory: string | [URL](url)): void` ### Parameters > `directory: string | [URL](url)` ### Return Type > `void` deno WindowEventMap WindowEventMap ============== ``` interface WindowEventMap {error: [ErrorEvent](errorevent); unhandledrejection: [PromiseRejectionEvent](promiserejectionevent); } ``` Properties ---------- > `error: [ErrorEvent](errorevent)` > `unhandledrejection: [PromiseRejectionEvent](promiserejectionevent)` deno WebAssembly.Table WebAssembly.Table ================= The `WebAssembly.Table()` object is a JavaScript wrapper object — an array-like structure representing a WebAssembly Table, which stores function references. A table created by JavaScript or in WebAssembly code will be accessible and mutable from both JavaScript and WebAssembly. [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WebAssembly/Table) ``` class Table { constructor(descriptor: [TableDescriptor](webassembly.tabledescriptor)); readonly length: number; get(index: number): Function | null; grow(delta: number): number; set(index: number, value: Function | null): void; } ``` Constructors ------------ > `new Table(descriptor: [TableDescriptor](webassembly.tabledescriptor))` Creates a new `Table` object. Properties ---------- > `length: number` Returns the length of the table, i.e. the number of elements. Methods ------- > `get(index: number): Function | null` Accessor function — gets the element stored at a given index. > `grow(delta: number): number` Increases the size of the `Table` instance by a specified number of elements. > `set(index: number, value: Function | null): void` Sets an element stored at a given index to a given value. deno Deno.InspectOptions Deno.InspectOptions =================== Option which can be specified when performing [`Deno.inspect`](deno#inspect). 
``` interface InspectOptions {colors?: boolean; compact?: boolean; depth?: number; getters?: boolean; iterableLimit?: number; showHidden?: boolean; showProxy?: boolean; sorted?: boolean; strAbbreviateSize?: number; trailingComma?: boolean; } ``` Properties ---------- > `colors?: boolean` Stylize output with ANSI colors. Defaults to `false`. > `compact?: boolean` Try to fit more than one entry of a collection on the same line. Defaults to `true`. > `depth?: number` Traversal depth for nested objects. Defaults to `4`. > `getters?: boolean` Evaluate the result of calling getters. Defaults to `false`. > `iterableLimit?: number` The maximum number of iterable entries to print. Defaults to `100`. > `showHidden?: boolean` Show an object's non-enumerable properties. Defaults to `false`. > `showProxy?: boolean` Show a Proxy's target and handler. Defaults to `false`. > `sorted?: boolean` Sort Object, Set and Map entries by key. Defaults to `false`. > `strAbbreviateSize?: number` The maximum length of a string before it is truncated with an ellipsis. > `trailingComma?: boolean` Add a trailing comma for multiline collections. Defaults to `false`. 
deno GPUCommandBufferDescriptor GPUCommandBufferDescriptor ========================== ``` interface GPUCommandBufferDescriptor extends [GPUObjectDescriptorBase](gpuobjectdescriptorbase) {} ``` Extends ------- > `[GPUObjectDescriptorBase](gpuobjectdescriptorbase)` deno GPUStoreOp GPUStoreOp ========== ``` type GPUStoreOp = "store" | "discard"; ``` Type ---- > `"store" | "discard"` deno Deno.RunPermissionDescriptor Deno.RunPermissionDescriptor ============================ ``` interface RunPermissionDescriptor {command?: string | [URL](url); name: "run"; } ``` Properties ---------- > `command?: string | [URL](url)` > `name: "run"` deno EventInit EventInit ========= ``` interface EventInit {bubbles?: boolean; cancelable?: boolean; composed?: boolean; } ``` Properties ---------- > `bubbles?: boolean` > `cancelable?: boolean` > `composed?: boolean` deno queueMicrotask queueMicrotask ============== A microtask is a short function which is executed after the function or module which created it exits and only if the JavaScript execution stack is empty, but before returning control to the event loop being used to drive the script's execution environment. This event loop may be either the main event loop or the event loop driving a web worker. ``` queueMicrotask(() => { console.log('This event loop stack is complete'); }); ``` ``` function queueMicrotask(func: [VoidFunction](voidfunction)): void; ``` > `queueMicrotask(func: [VoidFunction](voidfunction)): void` ### Parameters > `func: [VoidFunction](voidfunction)` ### Return Type > `void` deno Deno.errors.BadResource Deno.errors.BadResource ======================= The underlying IO resource is invalid or closed, and so the operation could not be performed. ``` class BadResource extends Error { } ``` Extends ------- > `Error` deno Deno.stat Deno.stat ========= Resolves to a [`Deno.FileInfo`](deno#FileInfo) for the specified `path`. Will always follow symlinks. 
``` import { assert } from "https://deno.land/std/testing/asserts.ts"; const fileInfo = await Deno.stat("hello.txt"); assert(fileInfo.isFile); ``` Requires `allow-read` permission. ``` function stat(path: string | [URL](url)): Promise<[FileInfo](deno.fileinfo)>; ``` > `stat(path: string | [URL](url)): Promise<[FileInfo](deno.fileinfo)>` ### Parameters > `path: string | [URL](url)` ### Return Type > `Promise<[FileInfo](deno.fileinfo)>` deno KeyAlgorithm KeyAlgorithm ============ ``` interface KeyAlgorithm {name: string; } ``` Properties ---------- > `name: string` deno Deno.remove Deno.remove =========== Removes the named file or directory. ``` await Deno.remove("/path/to/empty_dir/or/file"); await Deno.remove("/path/to/populated_dir/or/file", { recursive: true }); ``` Throws error if permission denied, path not found, or path is a non-empty directory and the `recursive` option isn't set to `true`. Requires `allow-write` permission. ``` function remove(path: string | [URL](url), options?: [RemoveOptions](deno.removeoptions)): Promise<void>; ``` > `remove(path: string | [URL](url), options?: [RemoveOptions](deno.removeoptions)): Promise<void>` ### Parameters > `path: string | [URL](url)` > `options?: [RemoveOptions](deno.removeoptions) optional` ### Return Type > `Promise<void>`
deno GPUFeatureName GPUFeatureName ============== ``` type GPUFeatureName = | "depth-clip-control" | "depth24unorm-stencil8" | "depth32float-stencil8" | "pipeline-statistics-query" | "texture-compression-bc" | "texture-compression-etc2" | "texture-compression-astc" | "timestamp-query" | "indirect-first-instance" | "shader-f16" | "mappable-primary-buffers" | "sampled-texture-binding-array" | "sampled-texture-array-dynamic-indexing" | "sampled-texture-array-non-uniform-indexing" | "unsized-binding-array" | "multi-draw-indirect" | "multi-draw-indirect-count" | "push-constants" | "address-mode-clamp-to-border" | "texture-adapter-specific-format-features" | "shader-float64" | "vertex-attribute-64bit"; ``` Type ---- > `"depth-clip-control" | "depth24unorm-stencil8" | "depth32float-stencil8" | "pipeline-statistics-query" | "texture-compression-bc" | "texture-compression-etc2" | "texture-compression-astc" | "timestamp-query" | "indirect-first-instance" | "shader-f16" | "mappable-primary-buffers" | "sampled-texture-binding-array" | "sampled-texture-array-dynamic-indexing" | "sampled-texture-array-non-uniform-indexing" | "unsized-binding-array" | "multi-draw-indirect" | "multi-draw-indirect-count" | "push-constants" | "address-mode-clamp-to-border" | "texture-adapter-specific-format-features" | "shader-float64" | "vertex-attribute-64bit"` deno Deno.copyFileSync Deno.copyFileSync ================= Synchronously copies the contents and permissions of one file to another specified path, by default creating a new file if needed, else overwriting. Fails if target path is a directory or is unwritable. ``` Deno.copyFileSync("from.txt", "to.txt"); ``` Requires `allow-read` permission on `fromPath`. Requires `allow-write` permission on `toPath`. 
``` function copyFileSync(fromPath: string | [URL](url), toPath: string | [URL](url)): void; ``` > `copyFileSync(fromPath: string | [URL](url), toPath: string | [URL](url)): void` ### Parameters > `fromPath: string | [URL](url)` > `toPath: string | [URL](url)` ### Return Type > `void` deno GPUAdapter GPUAdapter ========== ``` class GPUAdapter { readonly features: [GPUSupportedFeatures](gpusupportedfeatures); readonly isFallbackAdapter: boolean; readonly limits: [GPUSupportedLimits](gpusupportedlimits); requestAdapterInfo(unmaskHints?: string[]): Promise<[GPUAdapterInfo](gpuadapterinfo)>; requestDevice(descriptor?: [GPUDeviceDescriptor](gpudevicedescriptor)): Promise<[GPUDevice](gpudevice)>; } ``` Properties ---------- > `features: [GPUSupportedFeatures](gpusupportedfeatures)` > `isFallbackAdapter: boolean` > `limits: [GPUSupportedLimits](gpusupportedlimits)` Methods ------- > `requestAdapterInfo(unmaskHints?: string[]): Promise<[GPUAdapterInfo](gpuadapterinfo)>` > `requestDevice(descriptor?: [GPUDeviceDescriptor](gpudevicedescriptor)): Promise<[GPUDevice](gpudevice)>` deno Deno.realPath Deno.realPath ============= Resolves to the absolute normalized path, with symbolic links resolved. ``` // e.g. given /home/alice/file.txt and current directory /home/alice await Deno.symlink("file.txt", "symlink_file.txt"); const realPath = await Deno.realPath("./file.txt"); const realSymLinkPath = await Deno.realPath("./symlink_file.txt"); console.log(realPath); // outputs "/home/alice/file.txt" console.log(realSymLinkPath); // outputs "/home/alice/file.txt" ``` Requires `allow-read` permission for the target path. Also requires `allow-read` permission for the `CWD` if the target path is relative. 
``` function realPath(path: string | [URL](url)): Promise<string>; ``` > `realPath(path: string | [URL](url)): Promise<string>` ### Parameters > `path: string | [URL](url)` ### Return Type > `Promise<string>` deno Deno.mainModule Deno.mainModule =============== The URL of the entrypoint module entered from the command-line. ``` const mainModule: string; ``` deno AbortSignalEventMap AbortSignalEventMap =================== ``` interface AbortSignalEventMap {abort: [Event](event); } ``` Properties ---------- > `abort: [Event](event)` deno GPUVertexFormat GPUVertexFormat =============== ``` type GPUVertexFormat = | "uint8x2" | "uint8x4" | "sint8x2" | "sint8x4" | "unorm8x2" | "unorm8x4" | "snorm8x2" | "snorm8x4" | "uint16x2" | "uint16x4" | "sint16x2" | "sint16x4" | "unorm16x2" | "unorm16x4" | "snorm16x2" | "snorm16x4" | "float16x2" | "float16x4" | "float32" | "float32x2" | "float32x3" | "float32x4" | "uint32" | "uint32x2" | "uint32x3" | "uint32x4" | "sint32" | "sint32x2" | "sint32x3" | "sint32x4"; ``` Type ---- > `"uint8x2" | "uint8x4" | "sint8x2" | "sint8x4" | "unorm8x2" | "unorm8x4" | "snorm8x2" | "snorm8x4" | "uint16x2" | "uint16x4" | "sint16x2" | "sint16x4" | "unorm16x2" | "unorm16x4" | "snorm16x2" | "snorm16x4" | "float16x2" | "float16x4" | "float32" | "float32x2" | "float32x3" | "float32x4" | "uint32" | "uint32x2" | "uint32x3" | "uint32x4" | "sint32" | "sint32x2" | "sint32x3" | "sint32x4"` deno Deno.SeekerSync Deno.SeekerSync =============== An abstract interface which when implemented provides an interface to seek within an open file/resource synchronously. 
``` interface SeekerSync {seekSync(offset: number, whence: [SeekMode](deno.seekmode)): number; } ``` Methods ------- > `seekSync(offset: number, whence: [SeekMode](deno.seekmode)): number` Seek sets the offset for the next `readSync()` or `writeSync()` to offset, interpreted according to `whence`: `Start` means relative to the start of the file, `Current` means relative to the current offset, and `End` means relative to the end. Seeking to an offset before the start of the file is an error. Seeking to any positive offset is legal, but the behavior of subsequent I/O operations on the underlying object is implementation-dependent. It returns the updated offset. deno Deno.fstatSync Deno.fstatSync ============== Synchronously returns a `Deno.FileInfo` for the given file stream. ``` import { assert } from "https://deno.land/std/testing/asserts.ts"; const file = Deno.openSync("file.txt", { read: true }); const fileInfo = Deno.fstatSync(file.rid); assert(fileInfo.isFile); ``` ``` function fstatSync(rid: number): [FileInfo](deno.fileinfo); ``` > `fstatSync(rid: number): [FileInfo](deno.fileinfo)` ### Parameters > `rid: number` ### Return Type > `[FileInfo](deno.fileinfo)` deno KeyUsage KeyUsage ======== ``` type KeyUsage = | "decrypt" | "deriveBits" | "deriveKey" | "encrypt" | "sign" | "unwrapKey" | "verify" | "wrapKey"; ``` Type ---- > `"decrypt" | "deriveBits" | "deriveKey" | "encrypt" | "sign" | "unwrapKey" | "verify" | "wrapKey"` deno RsaHashedImportParams RsaHashedImportParams ===================== ``` interface RsaHashedImportParams extends [Algorithm](algorithm) {hash: [HashAlgorithmIdentifier](hashalgorithmidentifier); } ``` Extends ------- > `[Algorithm](algorithm)` Properties ---------- > `hash: [HashAlgorithmIdentifier](hashalgorithmidentifier)` deno Deno.resolveDns Deno.resolveDns =============== ``` function resolveDns( query: string, recordType: | "A" | "AAAA" | "ANAME" | "CNAME" | "NS" | "PTR" , options?: [ResolveDnsOptions](deno.resolvednsoptions),): 
Promise<string[]>; function resolveDns( query: string, recordType: "CAA", options?: [ResolveDnsOptions](deno.resolvednsoptions),): Promise<[CAARecord](deno.caarecord)[]>; function resolveDns( query: string, recordType: "MX", options?: [ResolveDnsOptions](deno.resolvednsoptions),): Promise<[MXRecord](deno.mxrecord)[]>; function resolveDns( query: string, recordType: "NAPTR", options?: [ResolveDnsOptions](deno.resolvednsoptions),): Promise<[NAPTRRecord](deno.naptrrecord)[]>; function resolveDns( query: string, recordType: "SOA", options?: [ResolveDnsOptions](deno.resolvednsoptions),): Promise<[SOARecord](deno.soarecord)[]>; function resolveDns( query: string, recordType: "SRV", options?: [ResolveDnsOptions](deno.resolvednsoptions),): Promise<[SRVRecord](deno.srvrecord)[]>; function resolveDns( query: string, recordType: "TXT", options?: [ResolveDnsOptions](deno.resolvednsoptions),): Promise<string[][]>; function resolveDns( query: string, recordType: [RecordType](deno.recordtype), options?: [ResolveDnsOptions](deno.resolvednsoptions),): Promise<string[] | [CAARecord](deno.caarecord)[] | [MXRecord](deno.mxrecord)[] | [NAPTRRecord](deno.naptrrecord)[] | [SOARecord](deno.soarecord)[] | [SRVRecord](deno.srvrecord)[] | string[][]>; ``` > `resolveDns(query: string, recordType: "A" | "AAAA" | "ANAME" | "CNAME" | "NS" | "PTR", options?: [ResolveDnsOptions](deno.resolvednsoptions)): Promise<string[]>` ### Parameters > `query: string` > `recordType: "A" | "AAAA" | "ANAME" | "CNAME" | "NS" | "PTR"` > `options?: [ResolveDnsOptions](deno.resolvednsoptions) optional` ### Return Type > `Promise<string[]>` > `resolveDns(query: string, recordType: "CAA", options?: [ResolveDnsOptions](deno.resolvednsoptions)): Promise<[CAARecord](deno.caarecord)[]>` ### Parameters > `query: string` > `recordType: "CAA"` > `options?: [ResolveDnsOptions](deno.resolvednsoptions) optional` ### Return Type > `Promise<[CAARecord](deno.caarecord)[]>` > `resolveDns(query: string, recordType: "MX", options?: 
[ResolveDnsOptions](deno.resolvednsoptions)): Promise<[MXRecord](deno.mxrecord)[]>` ### Parameters > `query: string` > `recordType: "MX"` > `options?: [ResolveDnsOptions](deno.resolvednsoptions) optional` ### Return Type > `Promise<[MXRecord](deno.mxrecord)[]>` > `resolveDns(query: string, recordType: "NAPTR", options?: [ResolveDnsOptions](deno.resolvednsoptions)): Promise<[NAPTRRecord](deno.naptrrecord)[]>` ### Parameters > `query: string` > `recordType: "NAPTR"` > `options?: [ResolveDnsOptions](deno.resolvednsoptions) optional` ### Return Type > `Promise<[NAPTRRecord](deno.naptrrecord)[]>` > `resolveDns(query: string, recordType: "SOA", options?: [ResolveDnsOptions](deno.resolvednsoptions)): Promise<[SOARecord](deno.soarecord)[]>` ### Parameters > `query: string` > `recordType: "SOA"` > `options?: [ResolveDnsOptions](deno.resolvednsoptions) optional` ### Return Type > `Promise<[SOARecord](deno.soarecord)[]>` > `resolveDns(query: string, recordType: "SRV", options?: [ResolveDnsOptions](deno.resolvednsoptions)): Promise<[SRVRecord](deno.srvrecord)[]>` ### Parameters > `query: string` > `recordType: "SRV"` > `options?: [ResolveDnsOptions](deno.resolvednsoptions) optional` ### Return Type > `Promise<[SRVRecord](deno.srvrecord)[]>` > `resolveDns(query: string, recordType: "TXT", options?: [ResolveDnsOptions](deno.resolvednsoptions)): Promise<string[][]>` ### Parameters > `query: string` > `recordType: "TXT"` > `options?: [ResolveDnsOptions](deno.resolvednsoptions) optional` ### Return Type > `Promise<string[][]>` > `resolveDns(query: string, recordType: [RecordType](deno.recordtype), options?: [ResolveDnsOptions](deno.resolvednsoptions)): Promise<string[] | [CAARecord](deno.caarecord)[] | [MXRecord](deno.mxrecord)[] | [NAPTRRecord](deno.naptrrecord)[] | [SOARecord](deno.soarecord)[] | [SRVRecord](deno.srvrecord)[] | string[][]>` Performs DNS resolution against the given query, returning resolved records. 
Fails in cases such as: * the query is in an invalid format * the options have an invalid parameter, e.g. `nameServer.port` is beyond the range of a 16-bit unsigned integer * the request timed out ``` const a = await Deno.resolveDns("example.com", "A"); const aaaa = await Deno.resolveDns("example.com", "AAAA", { nameServer: { ipAddr: "8.8.8.8", port: 53 }, }); ``` Requires `allow-net` permission. ### Parameters > `query: string` > `recordType: [RecordType](deno.recordtype)` > `options?: [ResolveDnsOptions](deno.resolvednsoptions) optional` ### Return Type > `Promise<string[] | [CAARecord](deno.caarecord)[] | [MXRecord](deno.mxrecord)[] | [NAPTRRecord](deno.naptrrecord)[] | [SOARecord](deno.soarecord)[] | [SRVRecord](deno.srvrecord)[] | string[][]>` deno Deno.readFileSync Deno.readFileSync ================= Synchronously reads and returns the entire contents of a file as an array of bytes. `TextDecoder` can be used to transform the bytes to a string if required. Reading a directory returns an empty data array. ``` const decoder = new TextDecoder("utf-8"); const data = Deno.readFileSync("hello.txt"); console.log(decoder.decode(data)); ``` Requires `allow-read` permission. ``` function readFileSync(path: string | [URL](url)): Uint8Array; ``` > `readFileSync(path: string | [URL](url)): Uint8Array` ### Parameters > `path: string | [URL](url)` ### Return Type > `Uint8Array` deno WebAssembly.MemoryDescriptor WebAssembly.MemoryDescriptor ============================ The `MemoryDescriptor` describes the options you can pass to `new WebAssembly.Memory()`. ``` interface MemoryDescriptor {initial: number; maximum?: number; shared?: boolean; } ``` Properties ---------- > `initial: number` > `maximum?: number` > `shared?: boolean` deno Deno.seekSync Deno.seekSync ============= Synchronously seek a resource ID (`rid`) to the given `offset` under mode given by `whence`. The new position within the resource (bytes from the start) is returned. 
``` const file = Deno.openSync( "hello.txt", { read: true, write: true, truncate: true, create: true }, ); Deno.writeSync(file.rid, new TextEncoder().encode("Hello world")); // advance cursor 6 bytes const cursorPosition = Deno.seekSync(file.rid, 6, Deno.SeekMode.Start); console.log(cursorPosition); // 6 const buf = new Uint8Array(100); file.readSync(buf); console.log(new TextDecoder().decode(buf)); // "world" file.close(); ``` The seek modes work as follows: ``` // Given file.rid pointing to file with "Hello world", which is 11 bytes long: const file = Deno.openSync( "hello.txt", { read: true, write: true, truncate: true, create: true }, ); Deno.writeSync(file.rid, new TextEncoder().encode("Hello world")); // Seek 6 bytes from the start of the file console.log(Deno.seekSync(file.rid, 6, Deno.SeekMode.Start)); // "6" // Seek 2 more bytes from the current position console.log(Deno.seekSync(file.rid, 2, Deno.SeekMode.Current)); // "8" // Seek backwards 2 bytes from the end of the file console.log(Deno.seekSync(file.rid, -2, Deno.SeekMode.End)); // "9" (e.g. 11-2) file.close(); ``` ``` function seekSync( rid: number, offset: number, whence: [SeekMode](deno.seekmode),): number; ``` > `seekSync(rid: number, offset: number, whence: [SeekMode](deno.seekmode)): number` ### Parameters > `rid: number` > `offset: number` > `whence: [SeekMode](deno.seekmode)` ### Return Type > `number` deno Deno.WriterSync Deno.WriterSync =============== An abstract interface which when implemented provides an interface to write bytes from an array buffer to a file/resource synchronously. ``` interface WriterSync {writeSync(p: Uint8Array): number; } ``` Methods ------- > `writeSync(p: Uint8Array): number` Writes `p.byteLength` bytes from `p` to the underlying data stream. It returns the number of bytes written from `p` (`0` <= `n` <= `p.byteLength`) and any error encountered that caused the write to stop early. `writeSync()` must throw a non-null error if it returns `n` < `p.byteLength`. 
`writeSync()` must not modify the slice data, even temporarily. Implementations should not retain a reference to `p`. deno GPUExtent3DDict GPUExtent3DDict =============== ``` interface GPUExtent3DDict {depthOrArrayLayers?: number; height?: number; width: number; } ``` Properties ---------- > `depthOrArrayLayers?: number` > `height?: number` > `width: number` deno GPUBlendOperation GPUBlendOperation ================= ``` type GPUBlendOperation = | "add" | "subtract" | "reverse-subtract" | "min" | "max"; ``` Type ---- > `"add" | "subtract" | "reverse-subtract" | "min" | "max"` deno Deno.Writer Deno.Writer =========== An abstract interface which when implemented provides an interface to write bytes from an array buffer to a file/resource asynchronously. ``` interface Writer {write(p: Uint8Array): Promise<number>; } ``` Methods ------- > `write(p: Uint8Array): Promise<number>` Writes `p.byteLength` bytes from `p` to the underlying data stream. It resolves to the number of bytes written from `p` (`0` <= `n` <= `p.byteLength`) or rejects with the error encountered that caused the write to stop early. `write()` must reject with a non-null error if it would resolve to `n` < `p.byteLength`. `write()` must not modify the slice data, even temporarily. Implementations should not retain a reference to `p`. 
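The `Writer` contract can be exercised with a small in-memory implementation. Below is a minimal sketch; `BufferWriter` and `demo` are illustrative names of our own, not part of the Deno API:

```typescript
// Minimal in-memory implementation of the Deno.Writer contract.
// BufferWriter is an illustrative name, not a Deno API.
class BufferWriter {
  private chunks: Uint8Array[] = [];

  // Resolves to the number of bytes written; this writer always
  // accepts the full chunk, so it resolves to p.byteLength.
  write(p: Uint8Array): Promise<number> {
    // Copy the data: a Writer must not retain a reference to `p`.
    this.chunks.push(p.slice());
    return Promise.resolve(p.byteLength);
  }

  // Concatenate everything written so far into one Uint8Array.
  bytes(): Uint8Array {
    const total = this.chunks.reduce((n, c) => n + c.byteLength, 0);
    const out = new Uint8Array(total);
    let offset = 0;
    for (const c of this.chunks) {
      out.set(c, offset);
      offset += c.byteLength;
    }
    return out;
  }
}

async function demo(): Promise<string> {
  const w = new BufferWriter();
  await w.write(new TextEncoder().encode("hello "));
  await w.write(new TextEncoder().encode("world"));
  return new TextDecoder().decode(w.bytes());
}
```

Per the contract above, a writer that can only accept part of `p` must reject rather than resolve to a short count.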
deno GPUDevice GPUDevice ========= ``` class GPUDevice extends EventTarget implements [GPUObjectBase](gpuobjectbase) { readonly features: [GPUSupportedFeatures](gpusupportedfeatures); label: string; readonly limits: [GPUSupportedLimits](gpusupportedlimits); readonly lost: Promise<[GPUDeviceLostInfo](gpudevicelostinfo)>; onuncapturederror: ((this: [GPUDevice](gpudevice), ev: [GPUUncapturedErrorEvent](gpuuncapturederrorevent)) => any) | null; readonly queue: [GPUQueue](gpuqueue); createBindGroup(descriptor: [GPUBindGroupDescriptor](gpubindgroupdescriptor)): [GPUBindGroup](gpubindgroup); createBindGroupLayout(descriptor: [GPUBindGroupLayoutDescriptor](gpubindgrouplayoutdescriptor)): [GPUBindGroupLayout](gpubindgrouplayout); createBuffer(descriptor: [GPUBufferDescriptor](gpubufferdescriptor)): [GPUBuffer](gpubuffer); createCommandEncoder(descriptor?: [GPUCommandEncoderDescriptor](gpucommandencoderdescriptor)): [GPUCommandEncoder](gpucommandencoder); createComputePipeline(descriptor: [GPUComputePipelineDescriptor](gpucomputepipelinedescriptor)): [GPUComputePipeline](gpucomputepipeline); createComputePipelineAsync(descriptor: [GPUComputePipelineDescriptor](gpucomputepipelinedescriptor)): Promise<[GPUComputePipeline](gpucomputepipeline)>; createPipelineLayout(descriptor: [GPUPipelineLayoutDescriptor](gpupipelinelayoutdescriptor)): [GPUPipelineLayout](gpupipelinelayout); createQuerySet(descriptor: [GPUQuerySetDescriptor](gpuquerysetdescriptor)): [GPUQuerySet](gpuqueryset); createRenderBundleEncoder(descriptor: [GPURenderBundleEncoderDescriptor](gpurenderbundleencoderdescriptor)): [GPURenderBundleEncoder](gpurenderbundleencoder); createRenderPipeline(descriptor: [GPURenderPipelineDescriptor](gpurenderpipelinedescriptor)): [GPURenderPipeline](gpurenderpipeline); createRenderPipelineAsync(descriptor: [GPURenderPipelineDescriptor](gpurenderpipelinedescriptor)): Promise<[GPURenderPipeline](gpurenderpipeline)>; createSampler(descriptor?: 
[GPUSamplerDescriptor](gpusamplerdescriptor)): [GPUSampler](gpusampler); createShaderModule(descriptor: [GPUShaderModuleDescriptor](gpushadermoduledescriptor)): [GPUShaderModule](gpushadermodule); createTexture(descriptor: [GPUTextureDescriptor](gputexturedescriptor)): [GPUTexture](gputexture); destroy(): undefined; popErrorScope(): Promise<[GPUError](gpuerror) | null>; pushErrorScope(filter: [GPUErrorFilter](gpuerrorfilter)): undefined; } ``` Extends ------- > `EventTarget` Implements ---------- > `[GPUObjectBase](gpuobjectbase)` Properties ---------- > `features: [GPUSupportedFeatures](gpusupportedfeatures)` > `label: string` > `limits: [GPUSupportedLimits](gpusupportedlimits)` > `lost: Promise<[GPUDeviceLostInfo](gpudevicelostinfo)>` > `onuncapturederror: ((this: [GPUDevice](gpudevice), ev: [GPUUncapturedErrorEvent](gpuuncapturederrorevent)) => any) | null` > `queue: [GPUQueue](gpuqueue)` Methods ------- > `createBindGroup(descriptor: [GPUBindGroupDescriptor](gpubindgroupdescriptor)): [GPUBindGroup](gpubindgroup)` > `createBindGroupLayout(descriptor: [GPUBindGroupLayoutDescriptor](gpubindgrouplayoutdescriptor)): [GPUBindGroupLayout](gpubindgrouplayout)` > `createBuffer(descriptor: [GPUBufferDescriptor](gpubufferdescriptor)): [GPUBuffer](gpubuffer)` > `createCommandEncoder(descriptor?: [GPUCommandEncoderDescriptor](gpucommandencoderdescriptor)): [GPUCommandEncoder](gpucommandencoder)` > `createComputePipeline(descriptor: [GPUComputePipelineDescriptor](gpucomputepipelinedescriptor)): [GPUComputePipeline](gpucomputepipeline)` > `createComputePipelineAsync(descriptor: [GPUComputePipelineDescriptor](gpucomputepipelinedescriptor)): Promise<[GPUComputePipeline](gpucomputepipeline)>` > `createPipelineLayout(descriptor: [GPUPipelineLayoutDescriptor](gpupipelinelayoutdescriptor)): [GPUPipelineLayout](gpupipelinelayout)` > `createQuerySet(descriptor: [GPUQuerySetDescriptor](gpuquerysetdescriptor)): [GPUQuerySet](gpuqueryset)` > `createRenderBundleEncoder(descriptor: 
[GPURenderBundleEncoderDescriptor](gpurenderbundleencoderdescriptor)): [GPURenderBundleEncoder](gpurenderbundleencoder)` > `createRenderPipeline(descriptor: [GPURenderPipelineDescriptor](gpurenderpipelinedescriptor)): [GPURenderPipeline](gpurenderpipeline)` > `createRenderPipelineAsync(descriptor: [GPURenderPipelineDescriptor](gpurenderpipelinedescriptor)): Promise<[GPURenderPipeline](gpurenderpipeline)>` > `createSampler(descriptor?: [GPUSamplerDescriptor](gpusamplerdescriptor)): [GPUSampler](gpusampler)` > `createShaderModule(descriptor: [GPUShaderModuleDescriptor](gpushadermoduledescriptor)): [GPUShaderModule](gpushadermodule)` > `createTexture(descriptor: [GPUTextureDescriptor](gputexturedescriptor)): [GPUTexture](gputexture)` > `destroy(): undefined` > `popErrorScope(): Promise<[GPUError](gpuerror) | null>` > `pushErrorScope(filter: [GPUErrorFilter](gpuerrorfilter)): undefined`
deno CloseEventInit CloseEventInit ============== ``` interface CloseEventInit extends [EventInit](eventinit) {code?: number; reason?: string; wasClean?: boolean; } ``` Extends ------- > `[EventInit](eventinit)` Properties ---------- > `code?: number` > `reason?: string` > `wasClean?: boolean` deno Deno.writeSync Deno.writeSync ============== Synchronously write to the resource ID (`rid`) the contents of the array buffer (`data`). Returns the number of bytes written. This function is one of the lowest level APIs and most users should not work with this directly, but rather use [`writeAllSync()`](https://deno.land/std/streams/conversion.ts?s=writeAllSync) from [`std/streams/conversion.ts`](https://deno.land/std/streams/conversion.ts) instead. **It is not guaranteed that the full buffer will be written in a single call.** ``` const encoder = new TextEncoder(); const data = encoder.encode("Hello world"); const file = Deno.openSync("/foo/bar.txt", { write: true }); const bytesWritten = Deno.writeSync(file.rid, data); // 11 Deno.close(file.rid); ``` ``` function writeSync(rid: number, data: Uint8Array): number; ``` > `writeSync(rid: number, data: Uint8Array): number` ### Parameters > `rid: number` > `data: Uint8Array` ### Return Type > `number` deno Deno.chown Deno.chown ========== Change owner of a regular file or directory. This functionality is not available on Windows. ``` await Deno.chown("myFile.txt", 1000, 1002); ``` Requires `allow-write` permission. Throws Error (not implemented) if executed on Windows. 
``` function chown( path: string | [URL](url), uid: number | null, gid: number | null,): Promise<void>; ``` > `chown(path: string | [URL](url), uid: number | null, gid: number | null): Promise<void>` ### Parameters > `path: string | [URL](url)` path to the file > `uid: number | null` user id (UID) of the new owner, or `null` for no change > `gid: number | null` group id (GID) of the new owner, or `null` for no change ### Return Type > `Promise<void>` deno GPUQuerySet GPUQuerySet =========== ``` class GPUQuerySet implements [GPUObjectBase](gpuobjectbase) { label: string; destroy(): undefined; } ``` Implements ---------- > `[GPUObjectBase](gpuobjectbase)` Properties ---------- > `label: string` Methods ------- > `destroy(): undefined` deno TransformStreamDefaultControllerCallback TransformStreamDefaultControllerCallback ======================================== ``` interface TransformStreamDefaultControllerCallback <O> {(controller: [TransformStreamDefaultController](transformstreamdefaultcontroller)<O>): void | PromiseLike<void>;} ``` Type Parameters --------------- > `O` Call Signatures --------------- > `(controller: [TransformStreamDefaultController](transformstreamdefaultcontroller)<O>): void | PromiseLike<void>` deno Deno.FfiPermissionDescriptor Deno.FfiPermissionDescriptor ============================ ``` interface FfiPermissionDescriptor {name: "ffi"; path?: string | [URL](url); } ``` Properties ---------- > `name: "ffi"` > `path?: string | [URL](url)` deno ProgressEventInit ProgressEventInit ================= ``` interface ProgressEventInit extends [EventInit](eventinit) {lengthComputable?: boolean; loaded?: number; total?: number; } ``` Extends ------- > `[EventInit](eventinit)` Properties ---------- > `lengthComputable?: boolean` > `loaded?: number` > `total?: number` deno TransformStreamDefaultController TransformStreamDefaultController ================================ ``` interface TransformStreamDefaultController <O = any> { readonly desiredSize: number 
| null; enqueue(chunk: O): void; error(reason?: any): void; terminate(): void; } ``` ``` var TransformStreamDefaultController: [TransformStreamDefaultController](transformstreamdefaultcontroller); ``` Type Parameters --------------- > `O = any` Properties ---------- > `readonly desiredSize: number | null` Methods ------- > `enqueue(chunk: O): void` > `error(reason?: any): void` > `terminate(): void` deno GPUBlendState GPUBlendState ============= ``` interface GPUBlendState {alpha: [GPUBlendComponent](gpublendcomponent); color: [GPUBlendComponent](gpublendcomponent); } ``` Properties ---------- > `alpha: [GPUBlendComponent](gpublendcomponent)` > `color: [GPUBlendComponent](gpublendcomponent)` deno GPURenderEncoderBase GPURenderEncoderBase ==================== ``` interface GPURenderEncoderBase {draw( vertexCount: number, instanceCount?: number, firstVertex?: number, firstInstance?: number,): undefined; drawIndexed( indexCount: number, instanceCount?: number, firstIndex?: number, baseVertex?: number, firstInstance?: number,): undefined; drawIndexedIndirect(indirectBuffer: [GPUBuffer](gpubuffer), indirectOffset: number): undefined; drawIndirect(indirectBuffer: [GPUBuffer](gpubuffer), indirectOffset: number): undefined; setIndexBuffer( buffer: [GPUBuffer](gpubuffer), indexFormat: [GPUIndexFormat](gpuindexformat), offset?: number, size?: number,): undefined; setPipeline(pipeline: [GPURenderPipeline](gpurenderpipeline)): undefined; setVertexBuffer( slot: number, buffer: [GPUBuffer](gpubuffer), offset?: number, size?: number,): undefined; } ``` Methods ------- > `draw( > vertexCount: number, > > instanceCount?: number, > > firstVertex?: number, > > firstInstance?: number,): undefined` > `drawIndexed( > indexCount: number, > > instanceCount?: number, > > firstIndex?: number, > > baseVertex?: number, > > firstInstance?: number,): undefined` > `drawIndexedIndirect(indirectBuffer: [GPUBuffer](gpubuffer), indirectOffset: number): undefined` > `drawIndirect(indirectBuffer: 
[GPUBuffer](gpubuffer), indirectOffset: number): undefined` > `setIndexBuffer( > buffer: [GPUBuffer](gpubuffer), > > indexFormat: [GPUIndexFormat](gpuindexformat), > > offset?: number, > > size?: number,): undefined` > `setPipeline(pipeline: [GPURenderPipeline](gpurenderpipeline)): undefined` > `setVertexBuffer( > slot: number, > > buffer: [GPUBuffer](gpubuffer), > > offset?: number, > > size?: number,): undefined` deno Deno.MkdirOptions Deno.MkdirOptions ================= Options which can be set when using [`Deno.mkdir`](deno#mkdir) and [`Deno.mkdirSync`](deno#mkdirSync). ``` interface MkdirOptions {mode?: number; recursive?: boolean; } ``` Properties ---------- > `mode?: number` Permissions to use when creating the directory (defaults to `0o777`, before the process's umask). Ignored on Windows. > `recursive?: boolean` Defaults to `false`. If set to `true`, means that any intermediate directories will also be created (as with the shell command `mkdir -p`). Intermediate directories are created with the same permissions. When recursive is set to `true`, succeeds silently (without changing any permissions) if a directory already exists at the path, or if the path is a symlink to an existing directory. deno WebAssembly.ModuleExportDescriptor WebAssembly.ModuleExportDescriptor ================================== A `ModuleExportDescriptor` is the description of a declared export in a `WebAssembly.Module`. ``` interface ModuleExportDescriptor {kind: [ImportExportKind](webassembly.importexportkind); name: string; } ``` Properties ---------- > `kind: [ImportExportKind](webassembly.importexportkind)` > `name: string` deno Deno.StartTlsOptions Deno.StartTlsOptions ==================== ``` interface StartTlsOptions {caCerts?: string[]; hostname?: string; } ``` Properties ---------- > `caCerts?: string[]` A list of root certificates that will be used in addition to the default root certificates to verify the peer's certificate. Must be in PEM format. 
> `hostname?: string` A literal IP address or host name that can be resolved to an IP address. If not specified, defaults to `127.0.0.1`. deno ReferrerPolicy ReferrerPolicy ============== ``` type ReferrerPolicy = | "" | "no-referrer" | "no-referrer-when-downgrade" | "origin" | "origin-when-cross-origin" | "same-origin" | "strict-origin" | "strict-origin-when-cross-origin" | "unsafe-url"; ``` Type ---- > `"" | "no-referrer" | "no-referrer-when-downgrade" | "origin" | "origin-when-cross-origin" | "same-origin" | "strict-origin" | "strict-origin-when-cross-origin" | "unsafe-url"` deno Deno.Reader Deno.Reader =========== An abstract interface which when implemented provides an interface to read bytes into an array buffer asynchronously. ``` interface Reader {read(p: Uint8Array): Promise<number | null>; } ``` Methods ------- > `read(p: Uint8Array): Promise<number | null>` Reads up to `p.byteLength` bytes into `p`. It resolves to the number of bytes read (`0` < `n` <= `p.byteLength`) and rejects if any error is encountered. Even if `read()` resolves to `n` < `p.byteLength`, it may use all of `p` as scratch space during the call. If some data is available but not `p.byteLength` bytes, `read()` conventionally resolves to what is available instead of waiting for more. When `read()` encounters an end-of-file condition, it resolves to EOF (`null`). When `read()` encounters an error, it rejects with an error. Callers should always process the `n` > `0` bytes returned before considering the EOF (`null`). Doing so correctly handles I/O errors that happen after reading some bytes and also both of the allowed EOF behaviors. Implementations should not retain a reference to `p`. Use [`iterateReader`](https://deno.land/std/streams/conversion.ts?s=iterateReader) from [`std/streams/conversion.ts`](https://deno.land/std/streams/conversion.ts) to turn a `Reader` into an `AsyncIterator`. 
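The read-loop discipline described above — process any `n > 0` bytes before treating `null` as EOF — can be sketched against a toy in-memory reader. `BytesReader` and `readAllFrom` are illustrative names of our own, not Deno APIs:

```typescript
// Toy in-memory implementation of the Deno.Reader contract.
class BytesReader {
  private pos = 0;
  constructor(private data: Uint8Array) {}

  read(p: Uint8Array): Promise<number | null> {
    // EOF: resolve to null once all bytes have been handed out.
    if (this.pos >= this.data.byteLength) return Promise.resolve(null);
    const n = Math.min(p.byteLength, this.data.byteLength - this.pos);
    p.set(this.data.subarray(this.pos, this.pos + n));
    this.pos += n;
    return Promise.resolve(n);
  }
}

// Drain a reader to EOF, reusing one buffer across calls.
async function readAllFrom(r: BytesReader): Promise<Uint8Array> {
  const buf = new Uint8Array(4); // deliberately small to force several reads
  const out: number[] = [];
  while (true) {
    const n = await r.read(buf);
    if (n === null) break; // check EOF only after consuming any n > 0 bytes
    out.push(...buf.subarray(0, n));
  }
  return new Uint8Array(out);
}
```

In real code, the `iterateReader` helper mentioned above plays the role of this loop.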
deno GPUTextureDescriptor GPUTextureDescriptor ==================== ``` interface GPUTextureDescriptor extends [GPUObjectDescriptorBase](gpuobjectdescriptorbase) {dimension?: [GPUTextureDimension](gputexturedimension); format: [GPUTextureFormat](gputextureformat); mipLevelCount?: number; sampleCount?: number; size: [GPUExtent3D](gpuextent3d); usage: [GPUTextureUsageFlags](gputextureusageflags); } ``` Extends ------- > `[GPUObjectDescriptorBase](gpuobjectdescriptorbase)` Properties ---------- > `dimension?: [GPUTextureDimension](gputexturedimension)` > `format: [GPUTextureFormat](gputextureformat)` > `mipLevelCount?: number` > `sampleCount?: number` > `size: [GPUExtent3D](gpuextent3d)` > `usage: [GPUTextureUsageFlags](gputextureusageflags)` deno atob atob ==== Decodes a string of data which has been encoded using base-64 encoding. ``` console.log(atob("aGVsbG8gd29ybGQ=")); // outputs 'hello world' ``` ``` function atob(s: string): string; ``` > `atob(s: string): string` ### Parameters > `s: string` ### Return Type > `string` deno Deno.chmod Deno.chmod ========== Changes the permissions of the file/directory at the specified path. Ignores the process's umask. ``` await Deno.chmod("/path/to/file", 0o666); ``` The mode is a sequence of 3 octal numbers. The first/left-most number specifies the permissions for the owner. The second number specifies the permissions for the group. The last/right-most number specifies the permissions for others. For example, with a mode of 0o764, the owner (7) can read/write/execute, the group (6) can read/write and everyone else (4) can read only. | Number | Description | | --- | --- | | 7 | read, write, and execute | | 6 | read and write | | 5 | read and execute | | 4 | read only | | 3 | write and execute | | 2 | write only | | 1 | execute only | | 0 | no permission | NOTE: This API currently throws on Windows. Requires `allow-write` permission. 
``` function chmod(path: string | [URL](url), mode: number): Promise<void>; ``` > `chmod(path: string | [URL](url), mode: number): Promise<void>` ### Parameters > `path: string | [URL](url)` > `mode: number` ### Return Type > `Promise<void>` deno GPUAdapterInfo GPUAdapterInfo ============== ``` class GPUAdapterInfo { readonly architecture: string; readonly description: string; readonly device: string; readonly vendor: string; } ``` Properties ---------- > `architecture: string` > `description: string` > `device: string` > `vendor: string` deno Deno.listen Deno.listen =========== Listen announces on the local transport address. ``` const listener1 = Deno.listen({ port: 80 }) const listener2 = Deno.listen({ hostname: "192.0.2.1", port: 80 }) const listener3 = Deno.listen({ hostname: "[2001:db8::1]", port: 80 }); const listener4 = Deno.listen({ hostname: "golang.org", port: 80, transport: "tcp" }); ``` Requires `allow-net` permission. ``` function listen(options: [ListenOptions](deno.listenoptions) & {transport?: "tcp"; }): [Listener](deno.listener); ``` > `listen(options: [ListenOptions](deno.listenoptions) & {transport?: "tcp"; }): [Listener](deno.listener)` ### Parameters > `options: [ListenOptions](deno.listenoptions) & {transport?: "tcp"; }` ### Return Type > `[Listener](deno.listener)` deno GPUQueue GPUQueue ======== ``` class GPUQueue implements [GPUObjectBase](gpuobjectbase) { label: string; onSubmittedWorkDone(): Promise<undefined>; submit(commandBuffers: [GPUCommandBuffer](gpucommandbuffer)[]): undefined; writeBuffer( buffer: [GPUBuffer](gpubuffer), bufferOffset: number, data: [BufferSource](buffersource), dataOffset?: number, size?: number,): undefined; writeTexture( destination: [GPUImageCopyTexture](gpuimagecopytexture), data: [BufferSource](buffersource), dataLayout: [GPUImageDataLayout](gpuimagedatalayout), size: [GPUExtent3D](gpuextent3d),): undefined; } ``` Implements ---------- > `[GPUObjectBase](gpuobjectbase)` Properties ---------- > `label: 
string` Methods ------- > `onSubmittedWorkDone(): Promise<undefined>` > `submit(commandBuffers: [GPUCommandBuffer](gpucommandbuffer)[]): undefined` > `writeBuffer(buffer: [GPUBuffer](gpubuffer), bufferOffset: number, data: [BufferSource](buffersource), dataOffset?: number, size?: number): undefined` > `writeTexture(destination: [GPUImageCopyTexture](gpuimagecopytexture), data: [BufferSource](buffersource), dataLayout: [GPUImageDataLayout](gpuimagedatalayout), size: [GPUExtent3D](gpuextent3d)): undefined` deno Deno.errors.AlreadyExists Deno.errors.AlreadyExists ========================= Raised when trying to create a resource, like a file, that already exists. ``` class AlreadyExists extends Error { } ``` Extends ------- > `Error` deno MessageChannel MessageChannel ============== The MessageChannel interface of the Channel Messaging API allows us to create a new message channel and send data through it via its two MessagePort properties. ``` class MessageChannel { constructor(); readonly port1: [MessagePort](messageport); readonly port2: [MessagePort](messageport); } ``` Constructors ------------ > `new MessageChannel()` Properties ---------- > `port1: [MessagePort](messageport)` > `port2: [MessagePort](messageport)` deno Deno.chmodSync Deno.chmodSync ============== Synchronously changes the permission of a specific file/directory of specified path. Ignores the process's umask. ``` Deno.chmodSync("/path/to/file", 0o666); ``` For a full description, see [`Deno.chmod`](deno#chmod). NOTE: This API currently throws on Windows Requires `allow-write` permission. 
``` function chmodSync(path: string | [URL](url), mode: number): void; ``` > `chmodSync(path: string | [URL](url), mode: number): void` ### Parameters > `path: string | [URL](url)` > `mode: number` ### Return Type > `void` deno RequestInit RequestInit =========== ``` interface RequestInit {body?: [BodyInit](bodyinit) | null; cache?: [RequestCache](requestcache); credentials?: [RequestCredentials](requestcredentials); headers?: [HeadersInit](headersinit); integrity?: string; keepalive?: boolean; method?: string; mode?: [RequestMode](requestmode); redirect?: [RequestRedirect](requestredirect); referrer?: string; referrerPolicy?: [ReferrerPolicy](referrerpolicy); signal?: [AbortSignal](abortsignal) | null; window?: any; } ``` Properties ---------- > `body?: [BodyInit](bodyinit) | null` A BodyInit object or null to set request's body. > `cache?: [RequestCache](requestcache)` A string indicating how the request will interact with the browser's cache to set request's cache. > `credentials?: [RequestCredentials](requestcredentials)` A string indicating whether credentials will be sent with the request always, never, or only when sent to a same-origin URL. Sets request's credentials. > `headers?: [HeadersInit](headersinit)` A Headers object, an object literal, or an array of two-item arrays to set request's headers. > `integrity?: string` A cryptographic hash of the resource to be fetched by request. Sets request's integrity. > `keepalive?: boolean` A boolean to set request's keepalive. > `method?: string` A string to set request's method. > `mode?: [RequestMode](requestmode)` A string to indicate whether the request will use CORS, or will be restricted to same-origin URLs. Sets request's mode. > `redirect?: [RequestRedirect](requestredirect)` A string indicating whether request follows redirects, results in an error upon encountering a redirect, or returns the redirect (in an opaque fashion). Sets request's redirect. 
> `referrer?: string` A string whose value is a same-origin URL, "about:client", or the empty string, to set request's referrer. > `referrerPolicy?: [ReferrerPolicy](referrerpolicy)` A referrer policy to set request's referrerPolicy. > `signal?: [AbortSignal](abortsignal) | null` An AbortSignal to set request's signal. > `window?: any` Can only be null. Used to disassociate request from any Window. deno Deno.readDirSync Deno.readDirSync ================ Synchronously reads the directory given by `path` and returns an iterable of `Deno.DirEntry`. ``` for (const dirEntry of Deno.readDirSync("/")) { console.log(dirEntry.name); } ``` Throws error if `path` is not a directory. Requires `allow-read` permission. ``` function readDirSync(path: string | [URL](url)): Iterable<[DirEntry](deno.direntry)>; ``` > `readDirSync(path: string | [URL](url)): Iterable<[DirEntry](deno.direntry)>` ### Parameters > `path: string | [URL](url)` ### Return Type > `Iterable<[DirEntry](deno.direntry)>` deno Deno.build Deno.build ========== Build related information. 
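For instance, the reported `os` can drive platform-specific branching. A minimal sketch (the `pathSeparatorFor` helper is hypothetical, not part of the API):

```typescript
// Hypothetical helper: picks a path separator for a given Deno.build.os value.
function pathSeparatorFor(os: "darwin" | "linux" | "windows"): string {
  // Windows uses backslashes; darwin and linux use forward slashes.
  return os === "windows" ? "\\" : "/";
}

// Under Deno, the current platform is available as Deno.build.os:
// const sep = pathSeparatorFor(Deno.build.os);
console.log(pathSeparatorFor("linux")); // "/"
```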
``` const build: {target: string; arch: "x86\_64" | "aarch64"; os: "darwin" | "linux" | "windows"; vendor: string; env?: string; }; ``` deno removeEventListener removeEventListener =================== Remove a previously registered event listener from the global scope ``` const listener = () => { console.log('hello'); }; addEventListener('load', listener); removeEventListener('load', listener); ``` ``` function removeEventListener<K extends keyof [WindowEventMap](windoweventmap)>( type: K, listener: (this: [Window](window), ev: [WindowEventMap](windoweventmap)[K]) => any, options?: boolean | [EventListenerOptions](eventlisteneroptions),): void; function removeEventListener( type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [EventListenerOptions](eventlisteneroptions),): void; ``` > `removeEventListener<K extends keyof [WindowEventMap](windoweventmap)>(type: K, listener: (this: [Window](window), ev: [WindowEventMap](windoweventmap)[K]) => any, options?: boolean | [EventListenerOptions](eventlisteneroptions)): void` Remove a previously registered event listener from the global scope ``` const listener = () => { console.log('hello'); }; addEventListener('load', listener); removeEventListener('load', listener); ``` ### Type Parameters > `K extends keyof [WindowEventMap](windoweventmap)` ### Parameters > `type: K` > `listener: (this: [Window](window), ev: [WindowEventMap](windoweventmap)[K]) => any` > `options?: boolean | [EventListenerOptions](eventlisteneroptions) optional` ### Return Type > `void` > `removeEventListener(type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [EventListenerOptions](eventlisteneroptions)): void` ### Parameters > `type: string` > `listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject)` > `options?: boolean | [EventListenerOptions](eventlisteneroptions) optional` ### Return Type > 
`void`
deno JsonWebKey JsonWebKey ========== ``` interface JsonWebKey {alg?: string; crv?: string; d?: string; dp?: string; dq?: string; e?: string; ext?: boolean; k?: string; key\_ops?: string[]; kty?: string; n?: string; oth?: [RsaOtherPrimesInfo](rsaotherprimesinfo)[]; p?: string; q?: string; qi?: string; use?: string; x?: string; y?: string; } ``` Properties ---------- > `alg?: string` > `crv?: string` > `d?: string` > `dp?: string` > `dq?: string` > `e?: string` > `ext?: boolean` > `k?: string` > `key_ops?: string[]` > `kty?: string` > `n?: string` > `oth?: [RsaOtherPrimesInfo](rsaotherprimesinfo)[]` > `p?: string` > `q?: string` > `qi?: string` > `use?: string` > `x?: string` > `y?: string` deno ErrorEventInit ErrorEventInit ============== ``` interface ErrorEventInit extends [EventInit](eventinit) {colno?: number; error?: any; filename?: string; lineno?: number; message?: string; } ``` Extends ------- > `[EventInit](eventinit)` Properties ---------- > `colno?: number` > `error?: any` > `filename?: string` > `lineno?: number` > `message?: string` deno GPUTextureBindingLayout GPUTextureBindingLayout ======================= ``` interface GPUTextureBindingLayout {multisampled?: boolean; sampleType?: [GPUTextureSampleType](gputexturesampletype); viewDimension?: [GPUTextureViewDimension](gputextureviewdimension); } ``` Properties ---------- > `multisampled?: boolean` > `sampleType?: [GPUTextureSampleType](gputexturesampletype)` > `viewDimension?: [GPUTextureViewDimension](gputextureviewdimension)` deno BufferSource BufferSource ============ ``` type BufferSource = ArrayBufferView | ArrayBuffer; ``` Type ---- > `ArrayBufferView | ArrayBuffer` deno Deno.ReaderSync Deno.ReaderSync =============== An abstract interface which when implemented provides an interface to read bytes into an array buffer synchronously. 
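A minimal in-memory implementation sketch (the `BytesReaderSync` class is hypothetical, for illustration only):

```typescript
// Hypothetical in-memory implementation of Deno.ReaderSync.
class BytesReaderSync {
  #offset = 0;
  constructor(private readonly bytes: Uint8Array) {}

  // Copies up to p.byteLength bytes into p and returns the count read,
  // or null once end-of-file is reached.
  readSync(p: Uint8Array): number | null {
    if (this.#offset >= this.bytes.length) return null; // EOF
    const chunk = this.bytes.subarray(this.#offset, this.#offset + p.byteLength);
    p.set(chunk);
    this.#offset += chunk.length;
    return chunk.length;
  }
}

const reader = new BytesReaderSync(new TextEncoder().encode("abc"));
const buf = new Uint8Array(2);
console.log(reader.readSync(buf)); // 2
console.log(reader.readSync(buf)); // 1
console.log(reader.readSync(buf)); // null
```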
``` interface ReaderSync {readSync(p: Uint8Array): number | null; } ``` Methods ------- > `readSync(p: Uint8Array): number | null` Reads up to `p.byteLength` bytes into `p`. It returns the number of bytes read (`0` < `n` <= `p.byteLength`) and throws if any error is encountered. Even if `readSync()` returns `n` < `p.byteLength`, it may use all of `p` as scratch space during the call. If some data is available but not `p.byteLength` bytes, `readSync()` conventionally returns what is available instead of waiting for more. When `readSync()` encounters an end-of-file condition, it returns EOF (`null`). When `readSync()` encounters an error, it throws. Callers should always process the `n` > `0` bytes returned before considering the EOF (`null`). Doing so correctly handles I/O errors that happen after reading some bytes and also both of the allowed EOF behaviors. Implementations should not retain a reference to `p`. Use [`iterateReaderSync`](https://deno.land/std/streams/conversion.ts?s=iterateReaderSync) from [`std/streams/conversion.ts`](https://deno.land/std/streams/conversion.ts) to turn a `ReaderSync` into an `Iterator`. deno GPUUncapturedErrorEventInit GPUUncapturedErrorEventInit =========================== ``` interface GPUUncapturedErrorEventInit extends [EventInit](eventinit) {error?: [GPUError](gpuerror); } ``` Extends ------- > `[EventInit](eventinit)` Properties ---------- > `error?: [GPUError](gpuerror)` deno RsaOtherPrimesInfo RsaOtherPrimesInfo ================== ``` interface RsaOtherPrimesInfo {d?: string; r?: string; t?: string; } ``` Properties ---------- > `d?: string` > `r?: string` > `t?: string` deno addEventListener addEventListener ================ Registers an event listener in the global scope, which will be called synchronously whenever the event `type` is dispatched. ``` addEventListener('unload', () => { console.log('All finished!'); }); ... 
dispatchEvent(new Event('unload')); ``` ``` function addEventListener<K extends keyof [WindowEventMap](windoweventmap)>( type: K, listener: (this: [Window](window), ev: [WindowEventMap](windoweventmap)[K]) => any, options?: boolean | [AddEventListenerOptions](addeventlisteneroptions),): void; function addEventListener( type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [AddEventListenerOptions](addeventlisteneroptions),): void; ``` > `addEventListener<K extends keyof [WindowEventMap](windoweventmap)>(type: K, listener: (this: [Window](window), ev: [WindowEventMap](windoweventmap)[K]) => any, options?: boolean | [AddEventListenerOptions](addeventlisteneroptions)): void` Registers an event listener in the global scope, which will be called synchronously whenever the event `type` is dispatched. ``` addEventListener('unload', () => { console.log('All finished!'); }); ... dispatchEvent(new Event('unload')); ``` ### Type Parameters > `K extends keyof [WindowEventMap](windoweventmap)` ### Parameters > `type: K` > `listener: (this: [Window](window), ev: [WindowEventMap](windoweventmap)[K]) => any` > `options?: boolean | [AddEventListenerOptions](addeventlisteneroptions) optional` ### Return Type > `void` > `addEventListener(type: string, listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject), options?: boolean | [AddEventListenerOptions](addeventlisteneroptions)): void` ### Parameters > `type: string` > `listener: [EventListenerOrEventListenerObject](eventlisteneroreventlistenerobject)` > `options?: boolean | [AddEventListenerOptions](addeventlisteneroptions) optional` ### Return Type > `void` deno Deno.errors.BrokenPipe Deno.errors.BrokenPipe ====================== Raised when trying to write to a resource and a broken pipe error occurs. 
This can happen when trying to write directly to `stdout` or `stderr` and the operating system is unable to pipe the output for a reason external to the Deno runtime. ``` class BrokenPipe extends Error { } ``` Extends ------- > `Error` deno WebAssembly.CompileError WebAssembly.CompileError ======================== The `WebAssembly.CompileError` object indicates an error during WebAssembly decoding or validation. [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WebAssembly/CompileError) ``` class CompileError extends Error { constructor(message?: string, options?: ErrorOptions);} ``` Extends ------- > `Error` Constructors ------------ > `new CompileError(message?: string, options?: ErrorOptions)` Creates a new `WebAssembly.CompileError` object. deno ReadableStreamBYOBRequest ReadableStreamBYOBRequest ========================= ``` interface ReadableStreamBYOBRequest { readonly view: ArrayBufferView | null; respond(bytesWritten: number): void; respondWithNewView(view: ArrayBufferView): void; } ``` Properties ---------- > `readonly view: ArrayBufferView | null` Methods ------- > `respond(bytesWritten: number): void` > `respondWithNewView(view: ArrayBufferView): void` deno Deno.ProcessStatus Deno.ProcessStatus ================== The status resolved from the `.status()` method of a [`Deno.Process`](deno#Process) instance. If `success` is `true`, then `code` will be `0`, but if `success` is `false`, the sub-process exit code will be set in `code`. 
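Because the type is a discriminated union on `success`, checking that field narrows the rest. A sketch (the union shape is reproduced locally so the example is self-contained, and `describe` is a hypothetical helper):

```typescript
// The union shape is reproduced locally so the sketch is self-contained.
type ProcessStatus =
  | { success: true; code: 0; signal?: undefined }
  | { success: false; code: number; signal?: number };

// Hypothetical helper: checking `success` narrows the union.
function describe(status: ProcessStatus): string {
  if (status.success) return "exited cleanly";
  // On failure, `signal` is set when the process was killed by a signal.
  return status.signal !== undefined
    ? `killed by signal ${status.signal}`
    : `exited with code ${status.code}`;
}

console.log(describe({ success: false, code: 1 })); // "exited with code 1"
```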
``` type ProcessStatus = {success: true; code: 0; signal?: undefined; } | {success: false; code: number; signal?: number; }; ``` Type ---- > `{success: true; > code: 0; > signal?: undefined; } | {success: false; > code: number; > signal?: number; }` deno KeyFormat KeyFormat ========= ``` type KeyFormat = | "jwk" | "pkcs8" | "raw" | "spki"; ``` Type ---- > `"jwk" | "pkcs8" | "raw" | "spki"` deno WebAssembly.ExportValue WebAssembly.ExportValue ======================= ``` type ExportValue = | Function | [Global](webassembly.global) | [Memory](webassembly.memory) | [Table](webassembly.table); ``` Type ---- > `Function | [Global](webassembly.global) | [Memory](webassembly.memory) | [Table](webassembly.table)` deno Deno.renameSync Deno.renameSync =============== Synchronously renames (moves) `oldpath` to `newpath`. Paths may be files or directories. If `newpath` already exists and is not a directory, `renameSync()` replaces it. OS-specific restrictions may apply when `oldpath` and `newpath` are in different directories. ``` Deno.renameSync("old/path", "new/path"); ``` On Unix-like OSes, this operation does not follow symlinks at either path. It varies between platforms when the operation throws errors, and if so what they are. It's always an error to rename anything to a non-empty directory. Requires `allow-read` and `allow-write` permissions. 
``` function renameSync(oldpath: string | [URL](url), newpath: string | [URL](url)): void; ``` > `renameSync(oldpath: string | [URL](url), newpath: string | [URL](url)): void` ### Parameters > `oldpath: string | [URL](url)` > `newpath: string | [URL](url)` ### Return Type > `void` deno GPUPrimitiveTopology GPUPrimitiveTopology ==================== ``` type GPUPrimitiveTopology = | "point-list" | "line-list" | "line-strip" | "triangle-list" | "triangle-strip"; ``` Type ---- > `"point-list" | "line-list" | "line-strip" | "triangle-list" | "triangle-strip"` deno ErrorConstructor ErrorConstructor ================ ``` interface ErrorConstructor {captureStackTrace(error: Object, constructor?: Function): void; } ``` Methods ------- > `captureStackTrace(error: Object, constructor?: Function): void` See <https://v8.dev/docs/stack-trace-api#stack-trace-collection-for-custom-exceptions>. deno GPUBlendComponent GPUBlendComponent ================= ``` interface GPUBlendComponent {dstFactor?: [GPUBlendFactor](gpublendfactor); operation?: [GPUBlendOperation](gpublendoperation); srcFactor?: [GPUBlendFactor](gpublendfactor); } ``` Properties ---------- > `dstFactor?: [GPUBlendFactor](gpublendfactor)` > `operation?: [GPUBlendOperation](gpublendoperation)` > `srcFactor?: [GPUBlendFactor](gpublendfactor)` deno Deno.iterSync Deno.iterSync ============= deprecated Turns a ReaderSync, `r`, into an iterator. @deprecated Use [`iterateReaderSync`](https://deno.land/std/streams/conversion.ts?s=iterateReaderSync) from [`std/streams/conversion.ts`](https://deno.land/std/streams/conversion.ts) instead. `Deno.iterSync` will be removed in the future. 
``` function iterSync(r: [ReaderSync](deno.readersync), options?: {bufSize?: number; }): IterableIterator<Uint8Array>; ``` > `iterSync(r: [ReaderSync](deno.readersync), options?: {bufSize?: number; }): IterableIterator<Uint8Array>` ### Parameters > `r: [ReaderSync](deno.readersync)` > `options?: {bufSize?: number; } optional` ### Return Type > `IterableIterator<Uint8Array>` deno PerformanceMeasure PerformanceMeasure ================== `PerformanceMeasure` is an abstract interface for `PerformanceEntry` objects with an entryType of `"measure"`. Entries of this type are created by calling `performance.measure()` to add a named `DOMHighResTimeStamp` (the measure) between two marks to the performance timeline. ``` class PerformanceMeasure extends PerformanceEntry { readonly detail: any; readonly entryType: "measure"; } ``` Extends ------- > `PerformanceEntry` Properties ---------- > `detail: any` > `entryType: "measure"` deno GPUProgrammablePassEncoder GPUProgrammablePassEncoder ========================== ``` interface GPUProgrammablePassEncoder {insertDebugMarker(markerLabel: string): undefined; popDebugGroup(): undefined; pushDebugGroup(groupLabel: string): undefined; setBindGroup( index: number, bindGroup: [GPUBindGroup](gpubindgroup), dynamicOffsets?: number[],): undefined; setBindGroup( index: number, bindGroup: [GPUBindGroup](gpubindgroup), dynamicOffsetsData: Uint32Array, dynamicOffsetsDataStart: number, dynamicOffsetsDataLength: number,): undefined; } ``` Methods ------- > `insertDebugMarker(markerLabel: string): undefined` > `popDebugGroup(): undefined` > `pushDebugGroup(groupLabel: string): undefined` > `setBindGroup( > index: number, > > bindGroup: [GPUBindGroup](gpubindgroup), > > dynamicOffsets?: number[],): undefined` > `setBindGroup( > index: number, > > bindGroup: [GPUBindGroup](gpubindgroup), > > dynamicOffsetsData: Uint32Array, > > dynamicOffsetsDataStart: number, > > dynamicOffsetsDataLength: number,): undefined` deno onunhandledrejection 
onunhandledrejection ==================== ``` var onunhandledrejection: ((this: [Window](window), ev: [PromiseRejectionEvent](promiserejectionevent)) => any) | null; ``` deno PromiseRejectionEventInit PromiseRejectionEventInit ========================= ``` interface PromiseRejectionEventInit extends [EventInit](eventinit) {promise: Promise<any>; reason?: any; } ``` Extends ------- > `[EventInit](eventinit)` Properties ---------- > `promise: Promise<any>` > `reason?: any` deno GPUSampler GPUSampler ========== ``` class GPUSampler implements [GPUObjectBase](gpuobjectbase) { label: string; } ``` Implements ---------- > `[GPUObjectBase](gpuobjectbase)` Properties ---------- > `label: string` deno Deno.removeSignalListener Deno.removeSignalListener ========================= Removes the given signal listener that has been registered with [`Deno.addSignalListener`](deno#addSignalListener). ``` const listener = () => { console.log("SIGTERM!") }; Deno.addSignalListener("SIGTERM", listener); Deno.removeSignalListener("SIGTERM", listener); ``` *Note*: On Windows only `"SIGINT"` (CTRL+C) and `"SIGBREAK"` (CTRL+Break) are supported. 
``` function removeSignalListener(signal: [Signal](deno.signal), handler: () => void): void; ``` > `removeSignalListener(signal: [Signal](deno.signal), handler: () => void): void` ### Parameters > `signal: [Signal](deno.signal)` > `handler: () => void` ### Return Type > `void` deno URLPatternComponentResult URLPatternComponentResult ========================= ``` interface URLPatternComponentResult {groups: Record<string, string>; input: string; } ``` Properties ---------- > `groups: Record<string, string>` > `input: string` deno Deno.ConnectTlsOptions Deno.ConnectTlsOptions ====================== ``` interface ConnectTlsOptions {caCerts?: string[]; certFile?: string; hostname?: string; port: number; } ``` Properties ---------- > `caCerts?: string[]` A list of root certificates that will be used in addition to the default root certificates to verify the peer's certificate. Must be in PEM format. > `certFile?: string` Server certificate file. > `hostname?: string` A literal IP address or host name that can be resolved to an IP address. If not specified, defaults to `127.0.0.1`. > `port: number` The port to connect to. deno GPUProgrammableStage GPUProgrammableStage ==================== ``` interface GPUProgrammableStage {entryPoint: string; module: [GPUShaderModule](gpushadermodule); } ``` Properties ---------- > `entryPoint: string` > `module: [GPUShaderModule](gpushadermodule)` deno Deno.create Deno.create =========== Creates a file if none exists or truncates an existing file and resolves to an instance of [`Deno.FsFile`](deno#FsFile). ``` const file = await Deno.create("/foo/bar.txt"); ``` Requires `allow-read` and `allow-write` permissions. 
``` function create(path: string | [URL](url)): Promise<[FsFile](deno.fsfile)>; ``` > `create(path: string | [URL](url)): Promise<[FsFile](deno.fsfile)>` ### Parameters > `path: string | [URL](url)` ### Return Type > `Promise<[FsFile](deno.fsfile)>` deno Deno.execPath Deno.execPath ============= Returns the path to the current deno executable. ``` console.log(Deno.execPath()); // e.g. "/home/alice/.local/bin/deno" ``` Requires `allow-read` permission. ``` function execPath(): string; ``` > `execPath(): string` ### Return Type > `string` deno Deno.writeFile Deno.writeFile ============== Write `data` to the given `path`, by default creating a new file if needed, else overwriting. ``` const encoder = new TextEncoder(); const data = encoder.encode("Hello world\n"); await Deno.writeFile("hello1.txt", data); // overwrite "hello1.txt" or create it await Deno.writeFile("hello2.txt", data, { create: false }); // only works if "hello2.txt" exists await Deno.writeFile("hello3.txt", data, { mode: 0o777 }); // set permissions on new file await Deno.writeFile("hello4.txt", data, { append: true }); // add data to the end of the file ``` Requires `allow-write` permission, and `allow-read` if `options.create` is `false`. 
``` function writeFile( path: string | [URL](url), data: Uint8Array, options?: [WriteFileOptions](deno.writefileoptions),): Promise<void>; ``` > `writeFile(path: string | [URL](url), data: Uint8Array, options?: [WriteFileOptions](deno.writefileoptions)): Promise<void>` ### Parameters > `path: string | [URL](url)` > `data: Uint8Array` > `options?: [WriteFileOptions](deno.writefileoptions) optional` ### Return Type > `Promise<void>` deno GPUImageDataLayout GPUImageDataLayout ================== ``` interface GPUImageDataLayout {bytesPerRow?: number; offset?: number; rowsPerImage?: number; } ``` Properties ---------- > `bytesPerRow?: number` > `offset?: number` > `rowsPerImage?: number` deno Deno.readLinkSync Deno.readLinkSync ================= Synchronously returns the full path destination of the named symbolic link. ``` Deno.symlinkSync("./test.txt", "./test\_link.txt"); const target = Deno.readLinkSync("./test\_link.txt"); // full path of ./test.txt ``` Throws TypeError if called with a hard link. Requires `allow-read` permission. 
``` function readLinkSync(path: string | [URL](url)): string; ``` > `readLinkSync(path: string | [URL](url)): string` ### Parameters > `path: string | [URL](url)` ### Return Type > `string` deno GPUShaderModuleDescriptor GPUShaderModuleDescriptor ========================= ``` interface GPUShaderModuleDescriptor extends [GPUObjectDescriptorBase](gpuobjectdescriptorbase) {code: string; sourceMap?: any; } ``` Extends ------- > `[GPUObjectDescriptorBase](gpuobjectdescriptorbase)` Properties ---------- > `code: string` > `sourceMap?: any` deno GPUImageCopyTexture GPUImageCopyTexture =================== ``` interface GPUImageCopyTexture {aspect?: [GPUTextureAspect](gputextureaspect); mipLevel?: number; origin?: [GPUOrigin3D](gpuorigin3d); texture: [GPUTexture](gputexture); } ``` Properties ---------- > `aspect?: [GPUTextureAspect](gputextureaspect)` > `mipLevel?: number` > `origin?: [GPUOrigin3D](gpuorigin3d)` > `texture: [GPUTexture](gputexture)` deno Pbkdf2Params Pbkdf2Params ============ ``` interface Pbkdf2Params extends [Algorithm](algorithm) {hash: [HashAlgorithmIdentifier](hashalgorithmidentifier); iterations: number; salt: [BufferSource](buffersource); } ``` Extends ------- > `[Algorithm](algorithm)` Properties ---------- > `hash: [HashAlgorithmIdentifier](hashalgorithmidentifier)` > `iterations: number` > `salt: [BufferSource](buffersource)` deno Deno.EnvPermissionDescriptor Deno.EnvPermissionDescriptor ============================ ``` interface EnvPermissionDescriptor {name: "env"; variable?: string; } ``` Properties ---------- > `name: "env"` > `variable?: string` deno GPUColorTargetState GPUColorTargetState =================== ``` interface GPUColorTargetState {blend?: [GPUBlendState](gpublendstate); format: [GPUTextureFormat](gputextureformat); writeMask?: [GPUColorWriteFlags](gpucolorwriteflags); } ``` Properties ---------- > `blend?: [GPUBlendState](gpublendstate)` > `format: [GPUTextureFormat](gputextureformat)` > `writeMask?: 
[GPUColorWriteFlags](gpucolorwriteflags)`
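As a sketch, a plain object matching the `GPUColorTargetState` shape with conventional source-over alpha blending might look like this (the texture format and blend factors are illustrative choices, and the `blend` field is assumed to take per-channel color and alpha components):

```typescript
// Illustrative values only: format and blend factors are a common choice,
// not a requirement of the interface.
const colorTarget = {
  format: "bgra8unorm",
  blend: {
    color: { srcFactor: "src-alpha", dstFactor: "one-minus-src-alpha", operation: "add" },
    alpha: { srcFactor: "one", dstFactor: "one-minus-src-alpha", operation: "add" },
  },
};

console.log(colorTarget.blend.color.dstFactor); // "one-minus-src-alpha"
```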
deno AesCbcParams AesCbcParams ============ ``` interface AesCbcParams extends [Algorithm](algorithm) {iv: [BufferSource](buffersource); } ``` Extends ------- > `[Algorithm](algorithm)` Properties ---------- > `iv: [BufferSource](buffersource)` deno Deno.NetAddr Deno.NetAddr ============ ``` interface NetAddr {hostname: string; port: number; transport: "tcp" | "udp"; } ``` Properties ---------- > `hostname: string` > `port: number` > `transport: "tcp" | "udp"` deno WebAssembly.RuntimeError WebAssembly.RuntimeError ======================== The `WebAssembly.RuntimeError` object is the error type that is thrown whenever WebAssembly specifies a trap. [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WebAssembly/RuntimeError) ``` class RuntimeError extends Error { constructor(message?: string, options?: ErrorOptions);} ``` Extends ------- > `Error` Constructors ------------ > `new RuntimeError(message?: string, options?: ErrorOptions)` Creates a new `WebAssembly.RuntimeError` object. deno GPUDeviceDescriptor GPUDeviceDescriptor =================== ``` interface GPUDeviceDescriptor extends [GPUObjectDescriptorBase](gpuobjectdescriptorbase) {requiredFeatures?: [GPUFeatureName](gpufeaturename)[]; requiredLimits?: Record<string, number>; } ``` Extends ------- > `[GPUObjectDescriptorBase](gpuobjectdescriptorbase)` Properties ---------- > `requiredFeatures?: [GPUFeatureName](gpufeaturename)[]` > `requiredLimits?: Record<string, number>` deno WritableStreamDefaultWriter WritableStreamDefaultWriter =========================== This Streams API interface is the object returned by WritableStream.getWriter() and once created locks the writer to the WritableStream, ensuring that no other streams can write to the underlying sink. 
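A minimal sketch of the lock-then-write lifecycle (the sink simply collects chunks into an array):

```typescript
// The sink below simply collects written chunks into an array.
const chunks: string[] = [];
const stream = new WritableStream<string>({
  write(chunk) { chunks.push(chunk); },
});

const writer = stream.getWriter(); // the stream is now locked to this writer
await writer.write("hello");
await writer.close();
// chunks is now ["hello"]; writer.releaseLock() would instead free the lock
// without closing the stream.
```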
``` interface WritableStreamDefaultWriter <W = any> { readonly closed: Promise<void>; readonly desiredSize: number | null; readonly ready: Promise<void>; abort(reason?: any): Promise<void>; close(): Promise<void>; releaseLock(): void; write(chunk: W): Promise<void>; } ``` ``` var WritableStreamDefaultWriter: {prototype: [WritableStreamDefaultWriter](writablestreamdefaultwriter); new (): [WritableStreamDefaultWriter](writablestreamdefaultwriter); }; ``` Type Parameters --------------- > `W = any` Properties ---------- > `readonly closed: Promise<void>` > `readonly desiredSize: number | null` > `readonly ready: Promise<void>` Methods ------- > `abort(reason?: any): Promise<void>` > `close(): Promise<void>` > `releaseLock(): void` > `write(chunk: W): Promise<void>` deno RsaHashedKeyGenParams RsaHashedKeyGenParams ===================== ``` interface RsaHashedKeyGenParams extends [RsaKeyGenParams](rsakeygenparams) {hash: [HashAlgorithmIdentifier](hashalgorithmidentifier); } ``` Extends ------- > `[RsaKeyGenParams](rsakeygenparams)` Properties ---------- > `hash: [HashAlgorithmIdentifier](hashalgorithmidentifier)` deno AesKeyGenParams AesKeyGenParams =============== ``` interface AesKeyGenParams extends [Algorithm](algorithm) {length: number; } ``` Extends ------- > `[Algorithm](algorithm)` Properties ---------- > `length: number` deno GPURequestAdapterOptions GPURequestAdapterOptions ======================== ``` interface GPURequestAdapterOptions {forceFallbackAdapter?: boolean; powerPreference?: [GPUPowerPreference](gpupowerpreference); } ``` Properties ---------- > `forceFallbackAdapter?: boolean` > `powerPreference?: [GPUPowerPreference](gpupowerpreference)` deno GPUTextureUsage GPUTextureUsage =============== ``` class GPUTextureUsage { static COPY\_DST: 2; static COPY\_SRC: 1; static RENDER\_ATTACHMENT: 16; static STORAGE\_BINDING: 8; static TEXTURE\_BINDING: 4; } ``` Static Properties ----------------- > `COPY_DST: 2` > `COPY_SRC: 1` > `RENDER_ATTACHMENT: 16` > 
`STORAGE_BINDING: 8` > `TEXTURE_BINDING: 4` deno GPUBindGroup GPUBindGroup ============ ``` class GPUBindGroup implements [GPUObjectBase](gpuobjectbase) { label: string; } ``` Implements ---------- > `[GPUObjectBase](gpuobjectbase)` Properties ---------- > `label: string` deno Deno.hostname Deno.hostname ============= Get the `hostname` of the machine the Deno process is running on. ``` console.log(Deno.hostname()); ``` Requires `allow-sys` permission. ``` function hostname(): string; ``` > `hostname(): string` ### Return Type > `string` deno GPURenderBundleDescriptor GPURenderBundleDescriptor ========================= ``` interface GPURenderBundleDescriptor extends [GPUObjectDescriptorBase](gpuobjectdescriptorbase) {} ``` Extends ------- > `[GPUObjectDescriptorBase](gpuobjectdescriptorbase)` deno TextEncoder TextEncoder =========== ``` interface TextEncoder { readonly encoding: "utf-8"; encode(input?: string): Uint8Array; encodeInto(input: string, dest: Uint8Array): [TextEncoderEncodeIntoResult](textencoderencodeintoresult); } ``` ``` var TextEncoder: {prototype: [TextEncoder](textencoder); new (): [TextEncoder](textencoder); }; ``` Properties ---------- > `readonly encoding: "utf-8"` Returns "utf-8". Methods ------- > `encode(input?: string): Uint8Array` Returns the result of running UTF-8's encoder. > `encodeInto(input: string, dest: Uint8Array): [TextEncoderEncodeIntoResult](textencoderencodeintoresult)` deno GPUPipelineBase GPUPipelineBase =============== ``` interface GPUPipelineBase {getBindGroupLayout(index: number): [GPUBindGroupLayout](gpubindgrouplayout); } ``` Methods ------- > `getBindGroupLayout(index: number): [GPUBindGroupLayout](gpubindgrouplayout)` deno Deno.connectTls Deno.connectTls =============== Establishes a secure connection over TLS (transport layer security) using an optional cert file, hostname (default is "127.0.0.1") and port. 
The cert file is optional and if not included Mozilla's root certificates will be used (see also <https://github.com/ctz/webpki-roots> for specifics) ``` const caCert = await Deno.readTextFile("./certs/my\_custom\_root\_CA.pem"); const conn1 = await Deno.connectTls({ port: 80 }); const conn2 = await Deno.connectTls({ caCerts: [caCert], hostname: "192.0.2.1", port: 80 }); const conn3 = await Deno.connectTls({ hostname: "[2001:db8::1]", port: 80 }); const conn4 = await Deno.connectTls({ caCerts: [caCert], hostname: "golang.org", port: 80}); ``` Requires `allow-net` permission. ``` function connectTls(options: [ConnectTlsOptions](deno.connecttlsoptions)): Promise<[TlsConn](deno.tlsconn)>; ``` > `connectTls(options: [ConnectTlsOptions](deno.connecttlsoptions)): Promise<[TlsConn](deno.tlsconn)>` ### Parameters > `options: [ConnectTlsOptions](deno.connecttlsoptions)` ### Return Type > `Promise<[TlsConn](deno.tlsconn)>` deno GPUBindGroupLayoutEntry GPUBindGroupLayoutEntry ======================= ``` interface GPUBindGroupLayoutEntry {binding: number; buffer?: [GPUBufferBindingLayout](gpubufferbindinglayout); sampler?: [GPUSamplerBindingLayout](gpusamplerbindinglayout); storageTexture?: [GPUStorageTextureBindingLayout](gpustoragetexturebindinglayout); texture?: [GPUTextureBindingLayout](gputexturebindinglayout); visibility: [GPUShaderStageFlags](gpushaderstageflags); } ``` Properties ---------- > `binding: number` > `buffer?: [GPUBufferBindingLayout](gpubufferbindinglayout)` > `sampler?: [GPUSamplerBindingLayout](gpusamplerbindinglayout)` > `storageTexture?: [GPUStorageTextureBindingLayout](gpustoragetexturebindinglayout)` > `texture?: [GPUTextureBindingLayout](gputexturebindinglayout)` > `visibility: [GPUShaderStageFlags](gpushaderstageflags)` deno TextDecoderStream TextDecoderStream ================= ``` interface TextDecoderStream { readonly [[Symbol.toStringTag]]: string; readonly encoding: string; readonly fatal: boolean; readonly ignoreBOM: boolean; readonly 
readable: [ReadableStream](readablestream)<string>; readonly writable: [WritableStream](writablestream)<[BufferSource](buffersource)>; } ``` ``` var TextDecoderStream: {prototype: [TextDecoderStream](textdecoderstream); new (label?: string, options?: [TextDecoderOptions](textdecoderoptions)): [TextDecoderStream](textdecoderstream); }; ``` Properties ---------- > `readonly [[Symbol.toStringTag]]: string` > `readonly encoding: string` Returns encoding's name, lowercased. > `readonly fatal: boolean` Returns `true` if error mode is "fatal", and `false` otherwise. > `readonly ignoreBOM: boolean` Returns `true` if ignore BOM flag is set, and `false` otherwise. > `readonly readable: [ReadableStream](readablestream)<string>` > `readonly writable: [WritableStream](writablestream)<[BufferSource](buffersource)>` deno Deno.makeTempDirSync Deno.makeTempDirSync ==================== Synchronously creates a new temporary directory in the default directory for temporary files, unless `dir` is specified. Other options allow prefixing and suffixing the directory name with `prefix` and `suffix`, respectively. The full path to the newly created directory is returned. Multiple programs calling this function simultaneously will create different directories. It is the caller's responsibility to remove the directory when no longer needed. ``` const tempDirName0 = Deno.makeTempDirSync(); // e.g. /tmp/2894ea76 const tempDirName1 = Deno.makeTempDirSync({ prefix: 'my_temp' }); // e.g. /tmp/my_temp339c944d ``` Requires `allow-write` permission. ``` function makeTempDirSync(options?: [MakeTempOptions](deno.maketempoptions)): string; ``` > `makeTempDirSync(options?: [MakeTempOptions](deno.maketempoptions)): string` ### Parameters > `options?: [MakeTempOptions](deno.maketempoptions) optional` ### Return Type > `string` deno Request Request ======= This Fetch API interface represents a resource request. 
``` class Request implements [Body](body) { constructor(input: [RequestInfo](requestinfo) | [URL](url), init?: [RequestInit](requestinit)); readonly body: [ReadableStream](readablestream)<Uint8Array> | null; readonly bodyUsed: boolean; readonly cache: [RequestCache](requestcache); readonly credentials: [RequestCredentials](requestcredentials); readonly destination: [RequestDestination](requestdestination); readonly headers: [Headers](headers); readonly integrity: string; readonly isHistoryNavigation: boolean; readonly isReloadNavigation: boolean; readonly keepalive: boolean; readonly method: string; readonly mode: [RequestMode](requestmode); readonly redirect: [RequestRedirect](requestredirect); readonly referrer: string; readonly referrerPolicy: [ReferrerPolicy](referrerpolicy); readonly signal: [AbortSignal](abortsignal); readonly url: string; arrayBuffer(): Promise<ArrayBuffer>; blob(): Promise<[Blob](blob)>; clone(): [Request](request); formData(): Promise<[FormData](formdata)>; json(): Promise<any>; text(): Promise<string>; } ``` Implements ---------- > `[Body](body)` Constructors ------------ > `new Request(input: [RequestInfo](requestinfo) | [URL](url), init?: [RequestInit](requestinit))` Properties ---------- > `body: [ReadableStream](readablestream)<Uint8Array> | null` A simple getter used to expose a `ReadableStream` of the body contents. > `bodyUsed: boolean` Stores a `Boolean` that declares whether the body has been used in a request yet. > `cache: [RequestCache](requestcache)` Returns the cache mode associated with request, which is a string indicating how the request will interact with the browser's cache when fetching. > `credentials: [RequestCredentials](requestcredentials)` Returns the credentials mode associated with request, which is a string indicating whether credentials will be sent with the request always, never, or only when sent to a same-origin URL. 
> `destination: [RequestDestination](requestdestination)` Returns the kind of resource requested by request, e.g., "document" or "script". > `headers: [Headers](headers)` Returns a Headers object consisting of the headers associated with request. Note that headers added in the network layer by the user agent will not be accounted for in this object, e.g., the "Host" header. > `integrity: string` Returns request's subresource integrity metadata, which is a cryptographic hash of the resource being fetched. Its value consists of multiple hashes separated by whitespace. [SRI] > `isHistoryNavigation: boolean` Returns a boolean indicating whether or not request is for a history navigation (a.k.a. back-forward navigation). > `isReloadNavigation: boolean` Returns a boolean indicating whether or not request is for a reload navigation. > `keepalive: boolean` Returns a boolean indicating whether or not request can outlive the global in which it was created. > `method: string` Returns request's HTTP method, which is "GET" by default. > `mode: [RequestMode](requestmode)` Returns the mode associated with request, which is a string indicating whether the request will use CORS, or will be restricted to same-origin URLs. > `redirect: [RequestRedirect](requestredirect)` Returns the redirect mode associated with request, which is a string indicating how redirects for the request will be handled during fetching. A request will follow redirects by default. > `referrer: string` Returns the referrer of request. Its value can be a same-origin URL if explicitly set in init, the empty string to indicate no referrer, and "about:client" when defaulting to the global's default. This is used during fetching to determine the value of the `Referer` header of the request being made. > `referrerPolicy: [ReferrerPolicy](referrerpolicy)` Returns the referrer policy associated with request. This is used during fetching to compute the value of the request's referrer. 
> `signal: [AbortSignal](abortsignal)` Returns the signal associated with request, which is an AbortSignal object indicating whether or not request has been aborted, and its abort event handler. > `url: string` Returns the URL of request as a string. Methods ------- > `arrayBuffer(): Promise<ArrayBuffer>` Takes a `Request` stream and reads it to completion. It returns a promise that resolves with an `ArrayBuffer`. > `blob(): Promise<[Blob](blob)>` Takes a `Request` stream and reads it to completion. It returns a promise that resolves with a `Blob`. > `clone(): [Request](request)` > `formData(): Promise<[FormData](formdata)>` Takes a `Request` stream and reads it to completion. It returns a promise that resolves with a `FormData` object. > `json(): Promise<any>` Takes a `Request` stream and reads it to completion. It returns a promise that resolves with the result of parsing the body text as JSON. > `text(): Promise<string>` Takes a `Request` stream and reads it to completion. It returns a promise that resolves with a `USVString` (text). deno Deno.createSync Deno.createSync =============== Creates a file if none exists or truncates an existing file and returns an instance of [`Deno.FsFile`](deno#FsFile). ``` const file = Deno.createSync("/foo/bar.txt"); ``` Requires `allow-read` and `allow-write` permissions. ``` function createSync(path: string | [URL](url)): [FsFile](deno.fsfile); ``` > `createSync(path: string | [URL](url)): [FsFile](deno.fsfile)` ### Parameters > `path: string | [URL](url)` ### Return Type > `[FsFile](deno.fsfile)` deno GPUColorDict GPUColorDict ============ ``` interface GPUColorDict {a: number; b: number; g: number; r: number; } ``` Properties ---------- > `a: number` > `b: number` > `g: number` > `r: number` jest Bypassing module mocks Bypassing module mocks ====================== Jest allows you to mock out whole modules in your tests, which can be useful for testing if your code is calling functions from that module correctly. 
However, sometimes you may want to use parts of a mocked module in your *test file*, in which case you want to access the original implementation, rather than a mocked version. Consider writing a test case for this `createUser` function: ``` import fetch from 'node-fetch'; export const createUser = async () => { const response = await fetch('http://website.com/users', {method: 'POST'}); const userId = await response.text(); return userId; }; ``` createUser.js Your test will want to mock the `fetch` function so that we can be sure that it gets called without actually making the network request. However, you'll also need to mock the return value of `fetch` with a `Response` (wrapped in a `Promise`), as our function uses it to grab the created user's ID. So you might initially try writing a test like this: ``` jest.mock('node-fetch'); import fetch, {Response} from 'node-fetch'; import {createUser} from './createUser'; test('createUser calls fetch with the right args and returns the user id', async () => { fetch.mockReturnValue(Promise.resolve(new Response('4'))); const userId = await createUser(); expect(fetch).toHaveBeenCalledTimes(1); expect(fetch).toHaveBeenCalledWith('http://website.com/users', { method: 'POST', }); expect(userId).toBe('4'); }); ``` However, if you ran that test you would find that the `createUser` function would fail, throwing the error: `TypeError: response.text is not a function`. This is because the `Response` class you've imported from `node-fetch` has been mocked (due to the `jest.mock` call at the top of the test file) so it no longer behaves the way it should. To get around problems like this, Jest provides the `jest.requireActual` helper. 
To make the above test work, make the following change to the imports in the test file: ``` // BEFORE jest.mock('node-fetch'); import fetch, {Response} from 'node-fetch'; ``` ``` // AFTER jest.mock('node-fetch'); import fetch from 'node-fetch'; const {Response} = jest.requireActual('node-fetch'); ``` This allows your test file to import the actual `Response` object from `node-fetch`, rather than a mocked version. This means the test will now pass correctly. jest Using with DynamoDB Using with DynamoDB =================== With the [Global Setup/Teardown](configuration#globalsetup-string) and [Async Test Environment](configuration#testenvironment-string) APIs, Jest can work smoothly with [DynamoDB](https://aws.amazon.com/dynamodb/). Use jest-dynamodb Preset ------------------------ [Jest DynamoDB](https://github.com/shelfio/jest-dynamodb) provides all required configuration to run your tests using DynamoDB. 1. First, install `@shelf/jest-dynamodb`: * npm * Yarn ``` npm install --save-dev @shelf/jest-dynamodb ``` ``` yarn add --dev @shelf/jest-dynamodb ``` 2. Specify the preset in your Jest configuration: ``` { "preset": "@shelf/jest-dynamodb" } ``` 3. Create `jest-dynamodb-config.js` and define your DynamoDB tables. See the [Create Table API](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/DynamoDB.html#createTable-property) ``` module.exports = { tables: [ { TableName: `files`, KeySchema: [{AttributeName: 'id', KeyType: 'HASH'}], AttributeDefinitions: [{AttributeName: 'id', AttributeType: 'S'}], ProvisionedThroughput: {ReadCapacityUnits: 1, WriteCapacityUnits: 1}, }, // etc ], }; ``` 4. Configure the DynamoDB client: ``` const {DocumentClient} = require('aws-sdk/clients/dynamodb'); const isTest = process.env.JEST_WORKER_ID; const config = { convertEmptyValues: true, ...(isTest && { endpoint: 'localhost:8000', sslEnabled: false, region: 'local-env', }), }; const ddb = new DocumentClient(config); ``` 5. 
Write tests ``` it('should insert item into table', async () => { await ddb .put({TableName: 'files', Item: {id: '1', hello: 'world'}}) .promise(); const {Item} = await ddb.get({TableName: 'files', Key: {id: '1'}}).promise(); expect(Item).toEqual({ id: '1', hello: 'world', }); }); ``` There's no need to load any dependencies. See [documentation](https://github.com/shelfio/jest-dynamodb) for details.
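The `JEST_WORKER_ID` check in the client configuration above is a general pattern: Jest sets that variable for every worker process, so its presence is a reliable signal that code is running under Jest. A minimal sketch of just that config-selection logic (here `buildConfig` is a hypothetical helper name, not part of the AWS SDK or the preset):

```javascript
// Sketch of the environment-toggle pattern from step 4.
// Jest sets JEST_WORKER_ID in every test worker, so its presence
// means we are running under Jest and should target the local emulator.
function buildConfig(env) {
  const isTest = Boolean(env.JEST_WORKER_ID);
  return {
    convertEmptyValues: true,
    // Spreading `false` is a no-op, so outside Jest nothing is added.
    ...(isTest && {
      endpoint: 'localhost:8000',
      sslEnabled: false,
      region: 'local-env',
    }),
  };
}

console.log(buildConfig({}).endpoint); // undefined (real AWS endpoint)
console.log(buildConfig({JEST_WORKER_ID: '1'}).endpoint); // localhost:8000
```

The same toggle works for any client that should point at a local fake during tests.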
jest Testing Asynchronous Code Testing Asynchronous Code ========================= It's common in JavaScript for code to run asynchronously. When you have code that runs asynchronously, Jest needs to know when the code it is testing has completed, before it can move on to another test. Jest has several ways to handle this. Promises -------- Return a promise from your test, and Jest will wait for that promise to resolve. If the promise is rejected, the test will fail. For example, let's say that `fetchData` returns a promise that is supposed to resolve to the string `'peanut butter'`. We could test it with: ``` test('the data is peanut butter', () => { return fetchData().then(data => { expect(data).toBe('peanut butter'); }); }); ``` Async/Await ----------- Alternatively, you can use `async` and `await` in your tests. To write an async test, use the `async` keyword in front of the function passed to `test`. For example, the same `fetchData` scenario can be tested with: ``` test('the data is peanut butter', async () => { const data = await fetchData(); expect(data).toBe('peanut butter'); }); test('the fetch fails with an error', async () => { expect.assertions(1); try { await fetchData(); } catch (e) { expect(e).toMatch('error'); } }); ``` You can combine `async` and `await` with `.resolves` or `.rejects`. ``` test('the data is peanut butter', async () => { await expect(fetchData()).resolves.toBe('peanut butter'); }); test('the fetch fails with an error', async () => { await expect(fetchData()).rejects.toMatch('error'); }); ``` In these cases, `async` and `await` are effectively syntactic sugar for the same logic as the promises example uses. caution Be sure to return (or `await`) the promise - if you omit the `return`/`await` statement, your test will complete before the promise returned from `fetchData` resolves or rejects. If you expect a promise to be rejected, use the `.catch` method. 
Make sure to add `expect.assertions` to verify that a certain number of assertions are called. Otherwise, a fulfilled promise would not fail the test. ``` test('the fetch fails with an error', () => { expect.assertions(1); return fetchData().catch(e => expect(e).toMatch('error')); }); ``` Callbacks --------- If you don't use promises, you can use callbacks. For example, let's say that `fetchData`, instead of returning a promise, expects a callback, i.e. it fetches some data and calls `callback(null, data)` when it is complete. You want to test that this returned data is the string `'peanut butter'`. By default, Jest tests complete once they reach the end of their execution. That means this test will *not* work as intended: ``` // Don't do this! test('the data is peanut butter', () => { function callback(error, data) { if (error) { throw error; } expect(data).toBe('peanut butter'); } fetchData(callback); }); ``` The problem is that the test will complete as soon as `fetchData` completes, before ever calling the callback. There is an alternate form of `test` that fixes this. Instead of putting the test in a function with an empty argument, use a single argument called `done`. Jest will wait until the `done` callback is called before finishing the test. ``` test('the data is peanut butter', done => { function callback(error, data) { if (error) { done(error); return; } try { expect(data).toBe('peanut butter'); done(); } catch (error) { done(error); } } fetchData(callback); }); ``` If `done()` is never called, the test will fail (with a timeout error), which is what you want to happen. If the `expect` statement fails, it throws an error and `done()` is not called. If we want to see in the test log why it failed, we have to wrap `expect` in a `try` block and pass the error in the `catch` block to `done`. Otherwise, we end up with an opaque timeout error that doesn't show what value was received by `expect(data)`. 
*Note: `done()` should not be mixed with Promises as this tends to lead to memory leaks in your tests.* `.resolves` / `.rejects` ------------------------- You can also use the `.resolves` matcher in your expect statement, and Jest will wait for that promise to resolve. If the promise is rejected, the test will automatically fail. ``` test('the data is peanut butter', () => { return expect(fetchData()).resolves.toBe('peanut butter'); }); ``` Be sure to return the assertion; if you omit this `return` statement, your test will complete before the promise returned from `fetchData` resolves and `then()` has a chance to execute the callback. If you expect a promise to be rejected, use the `.rejects` matcher. It works analogously to the `.resolves` matcher. If the promise is fulfilled, the test will automatically fail. ``` test('the fetch fails with an error', () => { return expect(fetchData()).rejects.toMatch('error'); }); ``` None of these forms is particularly superior to the others, and you can mix and match them across a codebase or even in a single file. It just depends on which style you feel makes your tests simpler. jest Jest Documentation Jest Documentation ================== Install Jest using your favorite package manager: * npm * Yarn ``` npm install --save-dev jest ``` ``` yarn add --dev jest ``` Let's get started by writing a test for a hypothetical function that adds two numbers. First, create a `sum.js` file: ``` function sum(a, b) { return a + b; } module.exports = sum; ``` Then, create a file named `sum.test.js`. 
This will contain our actual test: ``` const sum = require('./sum'); test('adds 1 + 2 to equal 3', () => { expect(sum(1, 2)).toBe(3); }); ``` Add the following section to your `package.json`: ``` { "scripts": { "test": "jest" } } ``` Finally, run `yarn test` or `npm test` and Jest will print this message: ``` PASS ./sum.test.js ✓ adds 1 + 2 to equal 3 (5ms) ``` **You just successfully wrote your first test using Jest!** This test used `expect` and `toBe` to test that two values were exactly identical. To learn about the other things that Jest can test, see [Using Matchers](using-matchers). Running from command line ------------------------- You can run Jest directly from the CLI (if it's globally available in your `PATH`, e.g. by `yarn global add jest` or `npm install jest --global`) with a variety of useful options. Here's how to run Jest on files matching `my-test`, using `config.json` as a configuration file and displaying a native OS notification after the run: ``` jest my-test --notify --config=config.json ``` If you'd like to learn more about running `jest` through the command line, take a look at the [Jest CLI Options](cli) page. Additional Configuration ------------------------ ### Generate a basic configuration file Based on your project, Jest will ask you a few questions and will create a basic configuration file with a short description for each option: ``` jest --init ``` ### Using Babel To use [Babel](https://babeljs.io/), install the required dependencies: * npm * Yarn ``` npm install --save-dev babel-jest @babel/core @babel/preset-env ``` ``` yarn add --dev babel-jest @babel/core @babel/preset-env ``` Configure Babel to target your current version of Node by creating a `babel.config.js` file in the root of your project: ``` module.exports = { presets: [['@babel/preset-env', {targets: {node: 'current'}}]], }; ``` babel.config.js *The ideal configuration for Babel will depend on your project.* See [Babel's docs](https://babeljs.io/docs/en/) for more details. 
**Making your Babel config jest-aware** Jest will set `process.env.NODE_ENV` to `'test'` if it's not set to something else. You can use that in your configuration to conditionally set up only the compilation needed for Jest, e.g. ``` module.exports = api => { const isTest = api.env('test'); // You can use isTest to determine what presets and plugins to use. return { // ... }; }; ``` babel.config.js > Note: `babel-jest` is automatically installed when installing Jest and will automatically transform files if a babel configuration exists in your project. To avoid this behavior, you can explicitly reset the `transform` configuration option: > > ``` module.exports = { transform: {}, }; ``` jest.config.js ### Using webpack Jest can be used in projects that use [webpack](https://webpack.js.org/) to manage assets, styles, and compilation. webpack does offer some unique challenges over other tools. Refer to the [webpack guide](webpack) to get started. ### Using parcel Jest can be used in projects that use [parcel-bundler](https://parceljs.org/) to manage assets, styles, and compilation similar to webpack. Parcel requires zero configuration. Refer to the official [docs](https://parceljs.org/docs/) to get started. ### Using TypeScript #### Via `babel` Jest supports TypeScript via Babel. First, make sure you followed the instructions on [using Babel](#using-babel) above. Next, install `@babel/preset-typescript`: * npm * Yarn ``` npm install --save-dev @babel/preset-typescript ``` ``` yarn add --dev @babel/preset-typescript ``` Then add `@babel/preset-typescript` to the list of presets in your `babel.config.js`. ``` module.exports = { presets: [ ['@babel/preset-env', {targets: {node: 'current'}}], '@babel/preset-typescript', ], }; ``` babel.config.js However, there are some [caveats](https://babeljs.io/docs/en/babel-plugin-transform-typescript#caveats) to using TypeScript with Babel. 
Because TypeScript support in Babel is purely transpilation, Jest will not type-check your tests as they are run. If you want that, you can use [ts-jest](https://github.com/kulshekhar/ts-jest) instead, or just run the TypeScript compiler [tsc](https://www.typescriptlang.org/docs/handbook/compiler-options.html) separately (or as part of your build process). #### Via `ts-jest` [ts-jest](https://github.com/kulshekhar/ts-jest) is a TypeScript preprocessor with source map support for Jest that lets you use Jest to test projects written in TypeScript. * npm * Yarn ``` npm install --save-dev ts-jest ``` ``` yarn add --dev ts-jest ``` #### Type definitions There are two ways to have [Jest global APIs](api) typed for test files written in TypeScript. You can use the type definitions that ship with Jest and are updated each time you update Jest. Simply import the APIs from the `@jest/globals` package: ``` import {describe, expect, test} from '@jest/globals'; import {sum} from './sum'; describe('sum module', () => { test('adds 1 + 2 to equal 3', () => { expect(sum(1, 2)).toBe(3); }); }); ``` sum.test.ts tip See the additional usage documentation of [`describe.each`/`test.each`](api#typescript-usage) and [`mock functions`](mock-function-api#typescript-usage). Or you may choose to install the [`@types/jest`](https://npmjs.com/package/@types/jest) package. It provides types for Jest globals without a need to import them. * npm * Yarn ``` npm install --save-dev @types/jest ``` ``` yarn add --dev @types/jest ``` Note that `@types/jest` is a third party library maintained at [DefinitelyTyped](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/jest), hence the latest Jest features or versions may not be covered yet. Try to match versions of Jest and `@types/jest` as closely as possible. For example, if you are using Jest `27.4.0` then installing `27.4.x` of `@types/jest` is ideal. 
jest Mock Functions Mock Functions ============== Mock functions are also known as "spies", because they let you spy on the behavior of a function that is called indirectly by some other code, rather than only testing the output. You can create a mock function with `jest.fn()`. If no implementation is given, the mock function will return `undefined` when invoked. info The TypeScript examples from this page will only work as documented if you import `jest` from `'@jest/globals'`: ``` import {jest} from '@jest/globals'; ``` Methods ------- * [Reference](#reference) + [`mockFn.getMockName()`](#mockfngetmockname) + [`mockFn.mock.calls`](#mockfnmockcalls) + [`mockFn.mock.results`](#mockfnmockresults) + [`mockFn.mock.instances`](#mockfnmockinstances) + [`mockFn.mock.contexts`](#mockfnmockcontexts) + [`mockFn.mock.lastCall`](#mockfnmocklastcall) + [`mockFn.mockClear()`](#mockfnmockclear) + [`mockFn.mockReset()`](#mockfnmockreset) + [`mockFn.mockRestore()`](#mockfnmockrestore) + [`mockFn.mockImplementation(fn)`](#mockfnmockimplementationfn) + [`mockFn.mockImplementationOnce(fn)`](#mockfnmockimplementationoncefn) + [`mockFn.mockName(name)`](#mockfnmocknamename) + [`mockFn.mockReturnThis()`](#mockfnmockreturnthis) + [`mockFn.mockReturnValue(value)`](#mockfnmockreturnvaluevalue) + [`mockFn.mockReturnValueOnce(value)`](#mockfnmockreturnvalueoncevalue) + [`mockFn.mockResolvedValue(value)`](#mockfnmockresolvedvaluevalue) + [`mockFn.mockResolvedValueOnce(value)`](#mockfnmockresolvedvalueoncevalue) + [`mockFn.mockRejectedValue(value)`](#mockfnmockrejectedvaluevalue) + [`mockFn.mockRejectedValueOnce(value)`](#mockfnmockrejectedvalueoncevalue) * [TypeScript Usage](#typescript-usage) + [`jest.fn(implementation?)`](#jestfnimplementation) + [`jest.Mocked<Source>`](#jestmockedsource) + [`jest.mocked(source, options?)`](#jestmockedsource-options) Reference --------- ### `mockFn.getMockName()` Returns the mock name string set by calling `mockFn.mockName(value)`. 
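To build intuition for what a mock function records, here is a rough conceptual sketch in plain JavaScript. This is not Jest's implementation, and `makeMock` is a hypothetical name; it only illustrates how a recording mock can track its name, its call arguments, and its results the way the `mockFn.mock` properties on this page describe:

```javascript
// Conceptual sketch of a recording mock (NOT Jest's actual code):
// tracks calls, results, and a display name, like jest.fn() does.
function makeMock(impl = () => undefined) {
  const mock = {calls: [], results: []};
  let name = 'mock';
  function fn(...args) {
    mock.calls.push(args); // record the arguments of every call
    try {
      const value = impl(...args);
      mock.results.push({type: 'return', value});
      return value;
    } catch (value) {
      mock.results.push({type: 'throw', value});
      throw value;
    }
  }
  fn.mock = mock;
  fn.mockName = newName => ((name = newName), fn); // chainable, like Jest's
  fn.getMockName = () => name;
  return fn;
}

const add = makeMock((a, b) => a + b).mockName('add');
add(1, 2);
console.log(add.getMockName()); // 'add'
console.log(add.mock.calls); // [ [ 1, 2 ] ]
console.log(add.mock.results); // [ { type: 'return', value: 3 } ]
```

Jest's real mocks additionally track `instances`, `contexts`, and `lastCall`, and support the implementation- and return-value helpers listed in the reference below.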
### `mockFn.mock.calls` An array containing the call arguments of all calls that have been made to this mock function. Each item in the array is an array of arguments that were passed during the call. For example: A mock function `f` that has been called twice, with the arguments `f('arg1', 'arg2')`, and then with the arguments `f('arg3', 'arg4')`, would have a `mock.calls` array that looks like this: ``` [ ['arg1', 'arg2'], ['arg3', 'arg4'], ]; ``` ### `mockFn.mock.results` An array containing the results of all calls that have been made to this mock function. Each entry in this array is an object containing a `type` property, and a `value` property. `type` will be one of the following: * `'return'` - Indicates that the call completed by returning normally. * `'throw'` - Indicates that the call completed by throwing a value. * `'incomplete'` - Indicates that the call has not yet completed. This occurs if you test the result from within the mock function itself, or from within a function that was called by the mock. The `value` property contains the value that was thrown or returned. `value` is undefined when `type === 'incomplete'`. For example: A mock function `f` that has been called three times, returning `'result1'`, throwing an error, and then returning `'result2'`, would have a `mock.results` array that looks like this: ``` [ { type: 'return', value: 'result1', }, { type: 'throw', value: { /* Error instance */ }, }, { type: 'return', value: 'result2', }, ]; ``` ### `mockFn.mock.instances` An array that contains all the object instances that have been instantiated from this mock function using `new`. 
For example: A mock function that has been instantiated twice would have the following `mock.instances` array: ``` const mockFn = jest.fn(); const a = new mockFn(); const b = new mockFn(); mockFn.mock.instances[0] === a; // true mockFn.mock.instances[1] === b; // true ``` ### `mockFn.mock.contexts` An array that contains the contexts for all calls of the mock function. A context is the `this` value that a function receives when called. The context can be set using `Function.prototype.bind`, `Function.prototype.call` or `Function.prototype.apply`. For example: ``` const mockFn = jest.fn(); const boundMockFn = mockFn.bind(thisContext0); boundMockFn('a', 'b'); mockFn.call(thisContext1, 'a', 'b'); mockFn.apply(thisContext2, ['a', 'b']); mockFn.mock.contexts[0] === thisContext0; // true mockFn.mock.contexts[1] === thisContext1; // true mockFn.mock.contexts[2] === thisContext2; // true ``` ### `mockFn.mock.lastCall` An array containing the call arguments of the last call that was made to this mock function. If the function was not called, it will return `undefined`. For example: A mock function `f` that has been called twice, with the arguments `f('arg1', 'arg2')`, and then with the arguments `f('arg3', 'arg4')`, would have a `mock.lastCall` array that looks like this: ``` ['arg3', 'arg4']; ``` ### `mockFn.mockClear()` Clears all information stored in the [`mockFn.mock.calls`](#mockfnmockcalls), [`mockFn.mock.instances`](#mockfnmockinstances), [`mockFn.mock.contexts`](#mockfnmockcontexts) and [`mockFn.mock.results`](#mockfnmockresults) arrays. Often this is useful when you want to clean up a mock's usage data between two assertions. Beware that `mockFn.mockClear()` will replace `mockFn.mock`, not just reset the values of its properties! You should, therefore, avoid assigning `mockFn.mock` to other variables, temporary or not, to make sure you don't access stale data. 
The [`clearMocks`](configuration#clearmocks-boolean) configuration option is available to clear mocks automatically before each test. ### `mockFn.mockReset()` Does everything that [`mockFn.mockClear()`](#mockfnmockclear) does, and also removes any mocked return values or implementations. This is useful when you want to completely reset a *mock* back to its initial state. (Note that resetting a *spy* will result in a function with no return value.) The [`resetMocks`](configuration#resetmocks-boolean) configuration option is available to reset mocks automatically before each test. ### `mockFn.mockRestore()` Does everything that [`mockFn.mockReset()`](#mockfnmockreset) does, and also restores the original (non-mocked) implementation. This is useful when you want to mock functions in certain test cases and restore the original implementation in others. Beware that `mockFn.mockRestore()` only works when the mock was created with `jest.spyOn()`. Thus you have to take care of restoration yourself when manually assigning `jest.fn()`. The [`restoreMocks`](configuration#restoremocks-boolean) configuration option is available to restore mocks automatically before each test. 
* JavaScript * TypeScript ``` const mockFn = jest.fn(scalar => 42 + scalar); mockFn(0); // 42 mockFn(1); // 43 mockFn.mockImplementation(scalar => 36 + scalar); mockFn(2); // 38 mockFn(3); // 39 ``` ``` const mockFn = jest.fn((scalar: number) => 42 + scalar); mockFn(0); // 42 mockFn(1); // 43 mockFn.mockImplementation(scalar => 36 + scalar); mockFn(2); // 38 mockFn(3); // 39 ``` `.mockImplementation()` can also be used to mock class constructors: * JavaScript * TypeScript ``` module.exports = class SomeClass { method(a, b) {} }; ``` SomeClass.js ``` const SomeClass = require('./SomeClass'); jest.mock('./SomeClass'); // this happens automatically with automocking const mockMethod = jest.fn(); SomeClass.mockImplementation(() => { return { method: mockMethod, }; }); const some = new SomeClass(); some.method('a', 'b'); console.log('Calls to method: ', mockMethod.mock.calls); ``` SomeClass.test.js ``` export class SomeClass { method(a: string, b: string): void {} } ``` SomeClass.ts ``` import {SomeClass} from './SomeClass'; jest.mock('./SomeClass'); // this happens automatically with automocking const mockMethod = jest.fn<(a: string, b: string) => void>(); SomeClass.mockImplementation(() => { return { method: mockMethod, }; }); const some = new SomeClass(); some.method('a', 'b'); console.log('Calls to method: ', mockMethod.mock.calls); ``` SomeClass.test.ts ### `mockFn.mockImplementationOnce(fn)` Accepts a function that will be used as an implementation of the mock for one call to the mocked function. Can be chained so that multiple function calls produce different results. 
* JavaScript * TypeScript ``` const mockFn = jest .fn() .mockImplementationOnce(cb => cb(null, true)) .mockImplementationOnce(cb => cb(null, false)); mockFn((err, val) => console.log(val)); // true mockFn((err, val) => console.log(val)); // false ``` ``` const mockFn = jest .fn<(cb: (a: null, b: boolean) => void) => void>() .mockImplementationOnce(cb => cb(null, true)) .mockImplementationOnce(cb => cb(null, false)); mockFn((err, val) => console.log(val)); // true mockFn((err, val) => console.log(val)); // false ``` When the mocked function runs out of implementations defined with `.mockImplementationOnce()`, it will execute the default implementation set with `jest.fn(() => defaultValue)` or `.mockImplementation(() => defaultValue)` if they were called: ``` const mockFn = jest .fn(() => 'default') .mockImplementationOnce(() => 'first call') .mockImplementationOnce(() => 'second call'); mockFn(); // 'first call' mockFn(); // 'second call' mockFn(); // 'default' mockFn(); // 'default' ``` ### `mockFn.mockName(name)` Accepts a string to use in test result output in place of `'jest.fn()'` to indicate which mock function is being referenced. For example: ``` const mockFn = jest.fn().mockName('mockedFunction'); // mockFn(); expect(mockFn).toHaveBeenCalled(); ``` Will result in this error: ``` expect(mockedFunction).toHaveBeenCalled() Expected mock function "mockedFunction" to have been called, but it was not called. ``` ### `mockFn.mockReturnThis()` Syntactic sugar function for: ``` jest.fn(function () { return this; }); ``` ### `mockFn.mockReturnValue(value)` Accepts a value that will be returned whenever the mock function is called. 
* JavaScript * TypeScript ``` const mock = jest.fn(); mock.mockReturnValue(42); mock(); // 42 mock.mockReturnValue(43); mock(); // 43 ``` ``` const mock = jest.fn<() => number>(); mock.mockReturnValue(42); mock(); // 42 mock.mockReturnValue(43); mock(); // 43 ``` ### `mockFn.mockReturnValueOnce(value)` Accepts a value that will be returned for one call to the mock function. Can be chained so that successive calls to the mock function return different values. When there are no more `mockReturnValueOnce` values to use, calls will return a value specified by `mockReturnValue`. * JavaScript * TypeScript ``` const mockFn = jest .fn() .mockReturnValue('default') .mockReturnValueOnce('first call') .mockReturnValueOnce('second call'); mockFn(); // 'first call' mockFn(); // 'second call' mockFn(); // 'default' mockFn(); // 'default' ``` ``` const mockFn = jest .fn<() => string>() .mockReturnValue('default') .mockReturnValueOnce('first call') .mockReturnValueOnce('second call'); mockFn(); // 'first call' mockFn(); // 'second call' mockFn(); // 'default' mockFn(); // 'default' ``` ### `mockFn.mockResolvedValue(value)` Syntactic sugar function for: ``` jest.fn().mockImplementation(() => Promise.resolve(value)); ``` Useful to mock async functions in async tests: * JavaScript * TypeScript ``` test('async test', async () => { const asyncMock = jest.fn().mockResolvedValue(43); await asyncMock(); // 43 }); ``` ``` test('async test', async () => { const asyncMock = jest.fn<() => Promise<number>>().mockResolvedValue(43); await asyncMock(); // 43 }); ``` ### `mockFn.mockResolvedValueOnce(value)` Syntactic sugar function for: ``` jest.fn().mockImplementationOnce(() => Promise.resolve(value)); ``` Useful to resolve different values over multiple async calls: * JavaScript * TypeScript ``` test('async test', async () => { const asyncMock = jest .fn() .mockResolvedValue('default') .mockResolvedValueOnce('first call') .mockResolvedValueOnce('second call'); await asyncMock(); // 'first call' 
await asyncMock(); // 'second call' await asyncMock(); // 'default' await asyncMock(); // 'default' }); ``` ``` test('async test', async () => { const asyncMock = jest .fn<() => Promise<string>>() .mockResolvedValue('default') .mockResolvedValueOnce('first call') .mockResolvedValueOnce('second call'); await asyncMock(); // 'first call' await asyncMock(); // 'second call' await asyncMock(); // 'default' await asyncMock(); // 'default' }); ``` ### `mockFn.mockRejectedValue(value)` Syntactic sugar function for: ``` jest.fn().mockImplementation(() => Promise.reject(value)); ``` Useful to create async mock functions that will always reject: * JavaScript * TypeScript ``` test('async test', async () => { const asyncMock = jest .fn() .mockRejectedValue(new Error('Async error message')); await asyncMock(); // throws 'Async error message' }); ``` ``` test('async test', async () => { const asyncMock = jest .fn<() => Promise<never>>() .mockRejectedValue(new Error('Async error message')); await asyncMock(); // throws 'Async error message' }); ``` ### `mockFn.mockRejectedValueOnce(value)` Syntactic sugar function for: ``` jest.fn().mockImplementationOnce(() => Promise.reject(value)); ``` Useful together with `.mockResolvedValueOnce()` or to reject with different exceptions over multiple async calls: * JavaScript * TypeScript ``` test('async test', async () => { const asyncMock = jest .fn() .mockResolvedValueOnce('first call') .mockRejectedValueOnce(new Error('Async error message')); await asyncMock(); // 'first call' await asyncMock(); // throws 'Async error message' }); ``` ``` test('async test', async () => { const asyncMock = jest .fn<() => Promise<string>>() .mockResolvedValueOnce('first call') .mockRejectedValueOnce(new Error('Async error message')); await asyncMock(); // 'first call' await asyncMock(); // throws 'Async error message' }); ``` TypeScript Usage ---------------- tip Please consult the [Getting Started](getting-started#using-typescript) guide for details on how 
to set up Jest with TypeScript.

### `jest.fn(implementation?)`

Correct mock typings will be inferred if the implementation is passed to [`jest.fn()`](jest-object#jestfnimplementation). There are many use cases where the implementation is omitted. To ensure type safety you may pass a generic type argument (also see the examples above for reference):

```
import {expect, jest, test} from '@jest/globals';
import type add from './add';
import calculate from './calc';

test('calculate calls add', () => {
  // Create a new mock that can be used in place of `add`.
  const mockAdd = jest.fn<typeof add>();

  // `.mockImplementation()` now can infer that `a` and `b` are `number`
  // and that the returned value is a `number`.
  mockAdd.mockImplementation((a, b) => {
    // Yes, this mock is still adding two numbers but imagine this
    // was a complex function we are mocking.
    return a + b;
  });

  // `mockAdd` is properly typed and therefore accepted by anything
  // requiring `add`.
  calculate(mockAdd, 1, 2);

  expect(mockAdd).toBeCalledTimes(1);
  expect(mockAdd).toBeCalledWith(1, 2);
});
```

### `jest.Mocked<Source>`

The `jest.Mocked<Source>` utility type returns the `Source` type wrapped with type definitions of the Jest mock function.

```
import {expect, jest, test} from '@jest/globals';
import type {fetch} from 'node-fetch';

jest.mock('node-fetch');

let mockedFetch: jest.Mocked<typeof fetch>;

afterEach(() => {
  mockedFetch.mockClear();
});

test('makes correct call', () => {
  mockedFetch = getMockedFetch();
  // ...
});

test('returns correct data', () => {
  mockedFetch = getMockedFetch();
  // ...
});
```

Types of classes, functions or objects can be passed as the type argument to `jest.Mocked<Source>`. If you prefer to constrain the input type, use: `jest.MockedClass<Source>`, `jest.MockedFunction<Source>` or `jest.MockedObject<Source>`.

### `jest.mocked(source, options?)`

The `mocked()` helper method wraps types of the `source` object and its deeply nested members with type definitions of the Jest mock function.
You can pass `{shallow: true}` as the `options` argument to disable the deeply mocked behavior. Returns the `source` object. ``` export const song = { one: { more: { time: (t: number) => { return t; }, }, }, }; ``` song.ts ``` import {expect, jest, test} from '@jest/globals'; import {song} from './song'; jest.mock('./song'); jest.spyOn(console, 'log'); const mockedSong = jest.mocked(song); // or through `jest.Mocked<Source>` // const mockedSong = song as jest.Mocked<typeof song>; test('deep method is typed correctly', () => { mockedSong.one.more.time.mockReturnValue(12); expect(mockedSong.one.more.time(10)).toBe(12); expect(mockedSong.one.more.time.mock.calls).toHaveLength(1); }); test('direct usage', () => { jest.mocked(console.log).mockImplementation(() => { return; }); console.log('one more time'); expect(jest.mocked(console.log).mock.calls).toHaveLength(1); }); ``` song.test.ts
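To make the deep vs. shallow distinction concrete, here is a toy sketch in plain JavaScript. This is not Jest's actual implementation, only the shape of the behavior that `jest.mocked(source, {shallow: true})` describes: a deep wrap replaces functions at every nesting level, while a shallow wrap leaves nested members untouched.

```
// Toy illustration of deep vs. shallow wrapping — NOT Jest's implementation,
// just the shape of behavior that the `shallow` option describes.
function wrapFns(obj, {shallow = false} = {}) {
  const out = {};
  for (const [key, value] of Object.entries(obj)) {
    if (typeof value === 'function') {
      out[key] = (...args) => ({mocked: true, args}); // replace every function
    } else if (typeof value === 'object' && value !== null && !shallow) {
      out[key] = wrapFns(value); // deep mode: recurse into nested objects
    } else {
      out[key] = value; // shallow mode: nested members are left untouched
    }
  }
  return out;
}

const song = {one: {more: {time: t => t}}};

const deep = wrapFns(song); // deep.one.more.time is replaced
const shallowWrapped = wrapFns(song, {shallow: true}); // shallowWrapped.one is the original object
```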
jest Jest Platform Jest Platform ============= You can cherry pick specific features of Jest and use them as standalone packages. Here's a list of the available packages: jest-changed-files ------------------ Tool for identifying modified files in a git/hg repository. Exports two functions: * `getChangedFilesForRoots` returns a promise that resolves to an object with the changed files and repos. * `findRepos` returns a promise that resolves to a set of repositories contained in the specified path. ### Example ``` const {getChangedFilesForRoots} = require('jest-changed-files'); // print the set of modified files since last commit in the current repo getChangedFilesForRoots(['./'], { lastCommit: true, }).then(result => console.log(result.changedFiles)); ``` You can read more about `jest-changed-files` in the [readme file](https://github.com/facebook/jest/blob/main/packages/jest-changed-files/README.md). jest-diff --------- Tool for visualizing changes in data. Exports a function that compares two values of any type and returns a "pretty-printed" string illustrating the difference between the two arguments. ### Example ``` const {diff} = require('jest-diff'); const a = {a: {b: {c: 5}}}; const b = {a: {b: {c: 6}}}; const result = diff(a, b); // print diff console.log(result); ``` jest-docblock ------------- Tool for extracting and parsing the comments at the top of a JavaScript file. Exports various functions to manipulate the data inside the comment block. ### Example ``` const {parseWithComments} = require('jest-docblock'); const code = ` /** * This is a sample * * @flow */ console.log('Hello World!'); `; const parsed = parseWithComments(code); // prints an object with two attributes: comments and pragmas. console.log(parsed); ``` You can read more about `jest-docblock` in the [readme file](https://github.com/facebook/jest/blob/main/packages/jest-docblock/README.md). jest-get-type ------------- Module that identifies the primitive type of any JavaScript value. 
Exports a function that returns a string with the type of the value passed as argument. ### Example ``` const {getType} = require('jest-get-type'); const array = [1, 2, 3]; const nullValue = null; const undefinedValue = undefined; // prints 'array' console.log(getType(array)); // prints 'null' console.log(getType(nullValue)); // prints 'undefined' console.log(getType(undefinedValue)); ``` jest-validate ------------- Tool for validating configurations submitted by users. Exports a function that takes two arguments: the user's configuration and an object containing an example configuration and other options. The return value is an object with two attributes: * `hasDeprecationWarnings`, a boolean indicating whether the submitted configuration has deprecation warnings, * `isValid`, a boolean indicating whether the configuration is correct or not. ### Example ``` const {validate} = require('jest-validate'); const configByUser = { transform: '<rootDir>/node_modules/my-custom-transform', }; const result = validate(configByUser, { comment: ' Documentation: http://custom-docs.com', exampleConfig: {transform: '<rootDir>/node_modules/babel-jest'}, }); console.log(result); ``` You can read more about `jest-validate` in the [readme file](https://github.com/facebook/jest/blob/main/packages/jest-validate/README.md). jest-worker ----------- Module used for parallelization of tasks. Exports a class `JestWorker` that takes the path of Node.js module and lets you call the module's exported methods as if they were class methods, returning a promise that resolves when the specified method finishes its execution in a forked process. ### Example ``` module.exports = { myHeavyTask: args => { // long running CPU intensive task. 
},
};
```

heavy-task.js

```
const {Worker} = require('jest-worker');

async function main() {
  const worker = new Worker(require.resolve('./heavy-task.js'));

  // run 2 tasks in parallel with different arguments
  const results = await Promise.all([
    worker.myHeavyTask({foo: 'bar'}),
    worker.myHeavyTask({bar: 'foo'}),
  ]);

  console.log(results);
}

main();
```

main.js

You can read more about `jest-worker` in the [readme file](https://github.com/facebook/jest/blob/main/packages/jest-worker/README.md).

pretty-format
-------------

Exports a function that converts any JavaScript value into a human-readable string. Supports all built-in JavaScript types out of the box and allows extension for application-specific types via user-defined plugins.

### Example

```
const {format: prettyFormat} = require('pretty-format');

const val = {object: {}};
val.circularReference = val;
val[Symbol('foo')] = 'foo';
val.map = new Map([['prop', 'value']]);
val.array = [-0, Infinity, NaN];

console.log(prettyFormat(val));
```

You can read more about `pretty-format` in the [readme file](https://github.com/facebook/jest/blob/main/packages/pretty-format/README.md).

jest Setup and Teardown Setup and Teardown
==================

Often while writing tests you have some setup work that needs to happen before tests run, and you have some finishing work that needs to happen after tests run. Jest provides helper functions to handle this.

Repeating Setup
---------------

If you have some work you need to do repeatedly for many tests, you can use `beforeEach` and `afterEach` hooks.

For example, let's say that several tests interact with a database of cities. You have a method `initializeCityDatabase()` that must be called before each of these tests, and a method `clearCityDatabase()` that must be called after each of these tests.
You can do this with: ``` beforeEach(() => { initializeCityDatabase(); }); afterEach(() => { clearCityDatabase(); }); test('city database has Vienna', () => { expect(isCity('Vienna')).toBeTruthy(); }); test('city database has San Juan', () => { expect(isCity('San Juan')).toBeTruthy(); }); ``` `beforeEach` and `afterEach` can handle asynchronous code in the same ways that [tests can handle asynchronous code](asynchronous) - they can either take a `done` parameter or return a promise. For example, if `initializeCityDatabase()` returned a promise that resolved when the database was initialized, we would want to return that promise: ``` beforeEach(() => { return initializeCityDatabase(); }); ``` One-Time Setup -------------- In some cases, you only need to do setup once, at the beginning of a file. This can be especially bothersome when the setup is asynchronous, so you can't do it inline. Jest provides `beforeAll` and `afterAll` hooks to handle this situation. For example, if both `initializeCityDatabase()` and `clearCityDatabase()` returned promises, and the city database could be reused between tests, we could change our test code to: ``` beforeAll(() => { return initializeCityDatabase(); }); afterAll(() => { return clearCityDatabase(); }); test('city database has Vienna', () => { expect(isCity('Vienna')).toBeTruthy(); }); test('city database has San Juan', () => { expect(isCity('San Juan')).toBeTruthy(); }); ``` Scoping ------- By default, the `beforeAll` and `afterAll` blocks apply to every test in a file. You can also group tests together using a `describe` block. When they are inside a `describe` block, the `beforeAll` and `afterAll` blocks only apply to the tests within that `describe` block. For example, let's say we had not just a city database, but also a food database. 
We could do different setup for different tests: ``` // Applies to all tests in this file beforeEach(() => { return initializeCityDatabase(); }); test('city database has Vienna', () => { expect(isCity('Vienna')).toBeTruthy(); }); test('city database has San Juan', () => { expect(isCity('San Juan')).toBeTruthy(); }); describe('matching cities to foods', () => { // Applies only to tests in this describe block beforeEach(() => { return initializeFoodDatabase(); }); test('Vienna <3 veal', () => { expect(isValidCityFoodPair('Vienna', 'Wiener Schnitzel')).toBe(true); }); test('San Juan <3 plantains', () => { expect(isValidCityFoodPair('San Juan', 'Mofongo')).toBe(true); }); }); ``` Note that the top-level `beforeEach` is executed before the `beforeEach` inside the `describe` block. It may help to illustrate the order of execution of all hooks. ``` beforeAll(() => console.log('1 - beforeAll')); afterAll(() => console.log('1 - afterAll')); beforeEach(() => console.log('1 - beforeEach')); afterEach(() => console.log('1 - afterEach')); test('', () => console.log('1 - test')); describe('Scoped / Nested block', () => { beforeAll(() => console.log('2 - beforeAll')); afterAll(() => console.log('2 - afterAll')); beforeEach(() => console.log('2 - beforeEach')); afterEach(() => console.log('2 - afterEach')); test('', () => console.log('2 - test')); }); // 1 - beforeAll // 1 - beforeEach // 1 - test // 1 - afterEach // 2 - beforeAll // 1 - beforeEach // 2 - beforeEach // 2 - test // 2 - afterEach // 1 - afterEach // 2 - afterAll // 1 - afterAll ``` Order of Execution ------------------ Jest executes all describe handlers in a test file *before* it executes any of the actual tests. This is another reason to do setup and teardown inside `before*` and `after*` handlers rather than inside the `describe` blocks. 
Once the `describe` blocks are complete, by default Jest runs all the tests serially in the order they were encountered in the collection phase, waiting for each to finish and be tidied up before moving on.

Consider the following illustrative test file and output:

```
describe('describe outer', () => {
  console.log('describe outer-a');

  describe('describe inner 1', () => {
    console.log('describe inner 1');

    test('test 1', () => console.log('test 1'));
  });

  console.log('describe outer-b');

  test('test 2', () => console.log('test 2'));

  describe('describe inner 2', () => {
    console.log('describe inner 2');

    test('test 3', () => console.log('test 3'));
  });

  console.log('describe outer-c');
});

// describe outer-a
// describe inner 1
// describe outer-b
// describe inner 2
// describe outer-c
// test 1
// test 2
// test 3
```

Just like the `describe` and `test` blocks, Jest calls the `before*` and `after*` hooks in the order of declaration. Note that the `after*` hooks of the inner scope are called first, and those of the enclosing scope afterwards. For example, here is how you can set up and tear down resources which depend on each other:

```
beforeEach(() => console.log('connection setup'));
beforeEach(() => console.log('database setup'));

afterEach(() => console.log('database teardown'));
afterEach(() => console.log('connection teardown'));

test('test 1', () => console.log('test 1'));

describe('extra', () => {
  beforeEach(() => console.log('extra database setup'));
  afterEach(() => console.log('extra database teardown'));

  test('test 2', () => console.log('test 2'));
});

// connection setup
// database setup
// test 1
// database teardown
// connection teardown

// connection setup
// database setup
// extra database setup
// test 2
// extra database teardown
// database teardown
// connection teardown
```

note If you are using the `jasmine2` test runner, take into account that it calls the `after*` hooks in the reverse order of declaration.
To have identical output, the above example should be altered like this: ``` beforeEach(() => console.log('connection setup')); + afterEach(() => console.log('connection teardown')); beforeEach(() => console.log('database setup')); + afterEach(() => console.log('database teardown')); - afterEach(() => console.log('database teardown')); - afterEach(() => console.log('connection teardown')); // ... ``` General Advice -------------- If a test is failing, one of the first things to check should be whether the test is failing when it's the only test that runs. To run only one test with Jest, temporarily change that `test` command to a `test.only`: ``` test.only('this will be the only test that runs', () => { expect(true).toBe(false); }); test('this test will not run', () => { expect('A').toBe('A'); }); ``` If you have a test that often fails when it's run as part of a larger suite, but doesn't fail when you run it alone, it's a good bet that something from a different test is interfering with this one. You can often fix this by clearing some shared state with `beforeEach`. If you're not sure whether some shared state is being modified, you can also try a `beforeEach` that logs data. jest Manual Mocks Manual Mocks ============ Manual mocks are used to stub out functionality with mock data. For example, instead of accessing a remote resource like a website or a database, you might want to create a manual mock that allows you to use fake data. This ensures your tests will be fast and not flaky. Mocking user modules -------------------- Manual mocks are defined by writing a module in a `__mocks__/` subdirectory immediately adjacent to the module. For example, to mock a module called `user` in the `models` directory, create a file called `user.js` and put it in the `models/__mocks__` directory. Note that the `__mocks__` folder is case-sensitive, so naming the directory `__MOCKS__` will break on some systems. 
> When we require that module in our tests (meaning we want to use the manual mock instead of the real implementation), explicitly calling `jest.mock('./moduleName')` is **required**. > > Mocking Node modules -------------------- If the module you are mocking is a Node module (e.g.: `lodash`), the mock should be placed in the `__mocks__` directory adjacent to `node_modules` (unless you configured [`roots`](configuration#roots-arraystring) to point to a folder other than the project root) and will be **automatically** mocked. There's no need to explicitly call `jest.mock('module_name')`. Scoped modules (also known as [scoped packages](https://docs.npmjs.com/cli/v6/using-npm/scope)) can be mocked by creating a file in a directory structure that matches the name of the scoped module. For example, to mock a scoped module called `@scope/project-name`, create a file at `__mocks__/@scope/project-name.js`, creating the `@scope/` directory accordingly. > Warning: If we want to mock Node's core modules (e.g.: `fs` or `path`), then explicitly calling e.g. `jest.mock('path')` is **required**, because core Node modules are not mocked by default. > > Examples -------- ``` . ├── config ├── __mocks__ │   └── fs.js ├── models │   ├── __mocks__ │   │   └── user.js │   └── user.js ├── node_modules └── views ``` When a manual mock exists for a given module, Jest's module system will use that module when explicitly calling `jest.mock('moduleName')`. However, when `automock` is set to `true`, the manual mock implementation will be used instead of the automatically created mock, even if `jest.mock('moduleName')` is not called. To opt out of this behavior you will need to explicitly call `jest.unmock('moduleName')` in tests that should use the actual module implementation. > Note: In order to mock properly, Jest needs `jest.mock('moduleName')` to be in the same scope as the `require/import` statement. 
> > Here's a contrived example where we have a module that provides a summary of all the files in a given directory. In this case, we use the core (built in) `fs` module. ``` 'use strict'; const fs = require('fs'); function summarizeFilesInDirectorySync(directory) { return fs.readdirSync(directory).map(fileName => ({ directory, fileName, })); } exports.summarizeFilesInDirectorySync = summarizeFilesInDirectorySync; ``` FileSummarizer.js Since we'd like our tests to avoid actually hitting the disk (that's pretty slow and fragile), we create a manual mock for the `fs` module by extending an automatic mock. Our manual mock will implement custom versions of the `fs` APIs that we can build on for our tests: ``` 'use strict'; const path = require('path'); const fs = jest.createMockFromModule('fs'); // This is a custom function that our tests can use during setup to specify // what the files on the "mock" filesystem should look like when any of the // `fs` APIs are used. let mockFiles = Object.create(null); function __setMockFiles(newMockFiles) { mockFiles = Object.create(null); for (const file in newMockFiles) { const dir = path.dirname(file); if (!mockFiles[dir]) { mockFiles[dir] = []; } mockFiles[dir].push(path.basename(file)); } } // A custom version of `readdirSync` that reads from the special mocked out // file list set via __setMockFiles function readdirSync(directoryPath) { return mockFiles[directoryPath] || []; } fs.__setMockFiles = __setMockFiles; fs.readdirSync = readdirSync; module.exports = fs; ``` \_\_mocks\_\_/fs.js Now we write our test. 
Note that we need to explicitly tell Jest that we want to mock the `fs` module because it’s a core Node module:

```
'use strict';

jest.mock('fs');

describe('listFilesInDirectorySync', () => {
  const MOCK_FILE_INFO = {
    '/path/to/file1.js': 'console.log("file1 contents");',
    '/path/to/file2.txt': 'file2 contents',
  };

  beforeEach(() => {
    // Set up some mocked out file info before each test
    require('fs').__setMockFiles(MOCK_FILE_INFO);
  });

  test('includes all files in the directory in the summary', () => {
    const FileSummarizer = require('../FileSummarizer');
    const fileSummary = FileSummarizer.summarizeFilesInDirectorySync('/path/to');

    expect(fileSummary.length).toBe(2);
  });
});
```

\_\_tests\_\_/FileSummarizer-test.js

The example mock shown here uses [`jest.createMockFromModule`](jest-object#jestcreatemockfrommodulemodulename) to generate an automatic mock, and overrides its default behavior. This is the recommended approach, but is completely optional. If you do not want to use the automatic mock at all, you can export your own functions from the mock file. One downside to fully manual mocks is that they're manual – meaning you have to manually update them any time the module they are mocking changes. Because of this, it's best to use or extend the automatic mock when it works for your needs.

To ensure that a manual mock and its real implementation stay in sync, it might be useful to require the real module using [`jest.requireActual(moduleName)`](jest-object#jestrequireactualmodulename) in your manual mock and amend it with mock functions before exporting it.

The code for this example is available at [examples/manual-mocks](https://github.com/facebook/jest/tree/main/examples/manual-mocks).

Using with ES module imports
----------------------------

If you're using [ES module imports](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import) then you'll normally be inclined to put your `import` statements at the top of the test file.
But often you need to instruct Jest to use a mock before modules use it. For this reason, Jest will automatically hoist `jest.mock` calls to the top of the module (before any imports). To learn more about this and see it in action, see [this repo](https://github.com/kentcdodds/how-jest-mocking-works). Mocking methods which are not implemented in JSDOM -------------------------------------------------- If some code uses a method which JSDOM (the DOM implementation used by Jest) hasn't implemented yet, testing it is not easily possible. This is e.g. the case with `window.matchMedia()`. Jest returns `TypeError: window.matchMedia is not a function` and doesn't properly execute the test. In this case, mocking `matchMedia` in the test file should solve the issue: ``` Object.defineProperty(window, 'matchMedia', { writable: true, value: jest.fn().mockImplementation(query => ({ matches: false, media: query, onchange: null, addListener: jest.fn(), // deprecated removeListener: jest.fn(), // deprecated addEventListener: jest.fn(), removeEventListener: jest.fn(), dispatchEvent: jest.fn(), })), }); ``` This works if `window.matchMedia()` is used in a function (or method) which is invoked in the test. If `window.matchMedia()` is executed directly in the tested file, Jest reports the same error. In this case, the solution is to move the manual mock into a separate file and include this one in the test **before** the tested file: ``` import './matchMedia.mock'; // Must be imported before the tested file import {myMethod} from './file-to-test'; describe('myMethod()', () => { // Test the method here... }); ```
jest Testing React Apps Testing React Apps ================== At Facebook, we use Jest to test [React](https://reactjs.org/) applications. Setup ----- ### Setup with Create React App If you are new to React, we recommend using [Create React App](https://create-react-app.dev/). It is ready to use and [ships with Jest](https://create-react-app.dev/docs/running-tests/#docsNav)! You will only need to add `react-test-renderer` for rendering snapshots. Run * npm * Yarn ``` npm install --save-dev react-test-renderer ``` ``` yarn add --dev react-test-renderer ``` ### Setup without Create React App If you have an existing application you'll need to install a few packages to make everything work well together. We are using the `babel-jest` package and the `react` babel preset to transform our code inside of the test environment. Also see [using babel](getting-started#using-babel). Run * npm * Yarn ``` npm install --save-dev jest babel-jest @babel/preset-env @babel/preset-react react-test-renderer ``` ``` yarn add --dev jest babel-jest @babel/preset-env @babel/preset-react react-test-renderer ``` Your `package.json` should look something like this (where `<current-version>` is the actual latest version number for the package). 
Please add the scripts and jest configuration entries:

```
{
  "dependencies": {
    "react": "<current-version>",
    "react-dom": "<current-version>"
  },
  "devDependencies": {
    "@babel/preset-env": "<current-version>",
    "@babel/preset-react": "<current-version>",
    "babel-jest": "<current-version>",
    "jest": "<current-version>",
    "react-test-renderer": "<current-version>"
  },
  "scripts": {
    "test": "jest"
  }
}
```

```
module.exports = {
  presets: [
    '@babel/preset-env',
    ['@babel/preset-react', {runtime: 'automatic'}],
  ],
};
```

babel.config.js

**And you're good to go!**

### Snapshot Testing

Let's create a [snapshot test](snapshot-testing) for a Link component that renders hyperlinks:

```
import {useState} from 'react';

const STATUS = {
  HOVERED: 'hovered',
  NORMAL: 'normal',
};

export default function Link({page, children}) {
  const [status, setStatus] = useState(STATUS.NORMAL);

  const onMouseEnter = () => {
    setStatus(STATUS.HOVERED);
  };

  const onMouseLeave = () => {
    setStatus(STATUS.NORMAL);
  };

  return (
    <a
      className={status}
      href={page || '#'}
      onMouseEnter={onMouseEnter}
      onMouseLeave={onMouseLeave}
    >
      {children}
    </a>
  );
}
```

Link.js

> Note: Examples are using Function components, but Class components can be tested in the same way. See [React: Function and Class Components](https://reactjs.org/docs/components-and-props.html#function-and-class-components). Remember that with Class components, we expect Jest to be used to test props and not methods directly.
> > Now let's use React's test renderer and Jest's snapshot feature to interact with the component and capture the rendered output and create a snapshot file: ``` import renderer from 'react-test-renderer'; import Link from '../Link'; it('changes the class when hovered', () => { const component = renderer.create( <Link page="http://www.facebook.com">Facebook</Link>, ); let tree = component.toJSON(); expect(tree).toMatchSnapshot(); // manually trigger the callback renderer.act(() => { tree.props.onMouseEnter(); }); // re-rendering tree = component.toJSON(); expect(tree).toMatchSnapshot(); // manually trigger the callback renderer.act(() => { tree.props.onMouseLeave(); }); // re-rendering tree = component.toJSON(); expect(tree).toMatchSnapshot(); }); ``` Link.test.js When you run `yarn test` or `jest`, this will produce an output file like this: ``` exports[`changes the class when hovered 1`] = ` <a className="normal" href="http://www.facebook.com" onMouseEnter={[Function]} onMouseLeave={[Function]} > Facebook </a> `; exports[`changes the class when hovered 2`] = ` <a className="hovered" href="http://www.facebook.com" onMouseEnter={[Function]} onMouseLeave={[Function]} > Facebook </a> `; exports[`changes the class when hovered 3`] = ` <a className="normal" href="http://www.facebook.com" onMouseEnter={[Function]} onMouseLeave={[Function]} > Facebook </a> `; ``` \_\_tests\_\_/\_\_snapshots\_\_/Link.test.js.snap The next time you run the tests, the rendered output will be compared to the previously created snapshot. The snapshot should be committed along with code changes. When a snapshot test fails, you need to inspect whether it is an intended or unintended change. If the change is expected you can invoke Jest with `jest -u` to overwrite the existing snapshot. The code for this example is available at [examples/snapshot](https://github.com/facebook/jest/tree/main/examples/snapshot). 
#### Snapshot Testing with Mocks, Enzyme and React 16+

There's a caveat around snapshot testing when using Enzyme and React 16+. If you mock out a module using the following style:

```
jest.mock('../SomeDirectory/SomeComponent', () => 'SomeComponent');
```

Then you will see warnings in the console:

```
Warning: <SomeComponent /> is using uppercase HTML. Always use lowercase HTML tags in React.

# Or:
Warning: The tag <SomeComponent> is unrecognized in this browser. If you meant to render a React component, start its name with an uppercase letter.
```

React 16 triggers these warnings due to how it checks element types, and the mocked module fails these checks. Your options are:

1. Render as text. This way you won't see the props passed to the mock component in the snapshot, but it's straightforward:

   ```
   jest.mock('./SomeComponent', () => () => 'SomeComponent');
   ```

2. Render as a custom element. DOM "custom elements" aren't checked for anything and shouldn't fire warnings. They are lowercase and have a dash in the name.

   ```
   jest.mock('./Widget', () => () => <mock-widget />);
   ```

3. Use `react-test-renderer`. The test renderer doesn't care about element types and will happily accept e.g. `SomeComponent`. You could check snapshots using the test renderer, and check component behavior separately using Enzyme.

4. Disable warnings altogether (this should be done in your Jest setup file):

   ```
   jest.mock('fbjs/lib/warning', () => require('fbjs/lib/emptyFunction'));
   ```

   This shouldn't normally be your option of choice as useful warnings could be lost. However, in some cases, for example when testing react-native's components we are rendering react-native tags into the DOM and many warnings are irrelevant. Another option is to swizzle `console.warn` and suppress specific warnings.
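That last suggestion, swizzling `console.warn`, might look roughly like this in a Jest setup file (the file name and filter patterns below are illustrative, not a prescribed API):

```
// jest.setup.js (illustrative name): wrap console.warn so that only specific,
// known-noisy warnings are suppressed while everything else still gets through.
const realWarn = console.warn.bind(console);

const IGNORED_WARNINGS = [
  /is using uppercase HTML/,
  /is unrecognized in this browser/,
];

function shouldIgnoreWarning(message) {
  return IGNORED_WARNINGS.some(pattern => pattern.test(String(message)));
}

console.warn = (...args) => {
  if (shouldIgnoreWarning(args[0])) {
    return; // drop only the known noise
  }
  realWarn(...args); // all other warnings stay visible
};
```

Pointing Jest at such a file via the `setupFiles` configuration option keeps the suppression in one place.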
### DOM Testing If you'd like to assert and manipulate your rendered components, you can use [react-testing-library](https://github.com/kentcdodds/react-testing-library), [Enzyme](https://enzymejs.github.io/enzyme/), or React's [TestUtils](https://reactjs.org/docs/test-utils.html). The following two examples use react-testing-library and Enzyme. #### react-testing-library * npm * Yarn ``` npm install --save-dev @testing-library/react ``` ``` yarn add --dev @testing-library/react ``` Let's implement a checkbox which swaps between two labels: ``` import {useState} from 'react'; export default function CheckboxWithLabel({labelOn, labelOff}) { const [isChecked, setIsChecked] = useState(false); const onChange = () => { setIsChecked(!isChecked); }; return ( <label> <input type="checkbox" checked={isChecked} onChange={onChange} /> {isChecked ? labelOn : labelOff} </label> ); } ``` CheckboxWithLabel.js ``` import {cleanup, fireEvent, render} from '@testing-library/react'; import CheckboxWithLabel from '../CheckboxWithLabel'; // Note: running cleanup afterEach is done automatically for you in @testing-library/react@9.0.0 or higher // unmount and cleanup DOM after the test is finished. afterEach(cleanup); it('CheckboxWithLabel changes the text after click', () => { const {queryByLabelText, getByLabelText} = render( <CheckboxWithLabel labelOn="On" labelOff="Off" />, ); expect(queryByLabelText(/off/i)).toBeTruthy(); fireEvent.click(getByLabelText(/off/i)); expect(queryByLabelText(/on/i)).toBeTruthy(); }); ``` \_\_tests\_\_/CheckboxWithLabel-test.js The code for this example is available at [examples/react-testing-library](https://github.com/facebook/jest/tree/main/examples/react-testing-library). #### Enzyme * npm * Yarn ``` npm install --save-dev enzyme ``` ``` yarn add --dev enzyme ``` If you are using a React version below 15.5.0, you will also need to install `react-addons-test-utils`. Let's rewrite the test from above using Enzyme instead of react-testing-library. 
We use Enzyme's [shallow renderer](https://enzymejs.github.io/enzyme/docs/api/shallow.html) in this example. ``` import Enzyme, {shallow} from 'enzyme'; import Adapter from 'enzyme-adapter-react-16'; import CheckboxWithLabel from '../CheckboxWithLabel'; Enzyme.configure({adapter: new Adapter()}); it('CheckboxWithLabel changes the text after click', () => { // Render a checkbox with label in the document const checkbox = shallow(<CheckboxWithLabel labelOn="On" labelOff="Off" />); expect(checkbox.text()).toEqual('Off'); checkbox.find('input').simulate('change'); expect(checkbox.text()).toEqual('On'); }); ``` \_\_tests\_\_/CheckboxWithLabel-test.js The code for this example is available at [examples/enzyme](https://github.com/facebook/jest/tree/main/examples/enzyme). ### Custom transformers If you need more advanced functionality, you can also build your own transformer. Instead of using `babel-jest`, here is an example of using `@babel/core`: ``` 'use strict'; const {transform} = require('@babel/core'); const jestPreset = require('babel-preset-jest'); module.exports = { process(src, filename) { const result = transform(src, { filename, presets: [jestPreset], }); return result || src; }, }; ``` custom-transformer.js Don't forget to install the `@babel/core` and `babel-preset-jest` packages for this example to work. To make this work with Jest you need to update your Jest configuration with this: `"transform": {"\\.js$": "path/to/custom-transformer.js"}`. If you'd like to build a transformer with babel support, you can also use `babel-jest` to compose one and pass in your custom configuration options: ``` const babelJest = require('babel-jest'); module.exports = babelJest.createTransformer({ presets: ['my-custom-preset'], }); ``` See [dedicated docs](code-transformation#writing-custom-transformers) for more details. 
jest Migrating to Jest Migrating to Jest ================= If you'd like to try out Jest with an existing codebase, there are a number of ways to convert to Jest: * If you are using Jasmine, or a Jasmine-like API (for example [Mocha](https://mochajs.org)), Jest should be mostly compatible, which makes it less complicated to migrate to. * If you are using AVA, Expect.js (by Automattic), Jasmine, Mocha, proxyquire, Should.js or Tape you can automatically migrate with Jest Codemods (see below). * If you like [chai](http://chaijs.com/), you can upgrade to Jest and continue using chai. However, we recommend trying out Jest's assertions and their failure messages. Jest Codemods can migrate from chai (see below). jest-codemods ------------- If you are using [AVA](https://github.com/avajs/ava), [Chai](https://github.com/chaijs/chai), [Expect.js (by Automattic)](https://github.com/Automattic/expect.js), [Jasmine](https://github.com/jasmine/jasmine), [Mocha](https://github.com/mochajs/mocha), [proxyquire](https://github.com/thlorenz/proxyquire), [Should.js](https://github.com/shouldjs/should.js), [Tape](https://github.com/substack/tape), or [Sinon](https://sinonjs.org/) you can use the third-party [jest-codemods](https://github.com/skovhus/jest-codemods) to do most of the dirty migration work. It runs a code transformation on your codebase using [jscodeshift](https://github.com/facebook/jscodeshift). To transform your existing tests, navigate to the project containing the tests and run: ``` npx jest-codemods ``` More information can be found at <https://github.com/skovhus/jest-codemods>. jest Environment Variables Environment Variables ===================== Jest sets the following environment variables: ### `NODE_ENV` Set to `'test'` if it's not already set to something else. ### `JEST_WORKER_ID` Each worker process is assigned a unique id (index-based, starting with `1`). This is set to `1` for all tests when [`runInBand`](cli#--runinband) is set to true. 
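A common use of `JEST_WORKER_ID` is giving each parallel worker its own external resource. A minimal sketch (the `test_db_` naming is purely illustrative):

```javascript
// Derive a per-worker resource name so tests running in parallel workers
// don't collide (for example, one test database per worker). Inside Jest,
// process.env.JEST_WORKER_ID is "1", "2", ...; fall back to "1" elsewhere.
const workerId = process.env.JEST_WORKER_ID || '1';
const databaseName = `test_db_${workerId}`;
```

Such a snippet would typically live in a setup file or test helper, so every test in a worker talks to that worker's own database.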
jest Code Transformation Code Transformation =================== Jest runs the code in your project as JavaScript, but if you use some syntax not supported by Node out of the box (such as JSX, TypeScript, Vue templates) then you'll need to transform that code into plain JavaScript, similar to what you would do when building for browsers. Jest supports this via the [`transform`](configuration#transform-objectstring-pathtotransformer--pathtotransformer-object) configuration option. A transformer is a module that provides a method for transforming source files. For example, if you wanted to be able to use a new language feature in your modules or tests that aren't yet supported by Node, you might plug in a code preprocessor that would transpile a future version of JavaScript to a current one. Jest will cache the result of a transformation and attempt to invalidate that result based on a number of factors, such as the source of the file being transformed and changing configuration. Defaults -------- Jest ships with one transformer out of the box – [`babel-jest`](https://github.com/facebook/jest/tree/main/packages/babel-jest#setup). It will load your project's Babel configuration and transform any file matching the `/\.[jt]sx?$/` RegExp (in other words, any `.js`, `.jsx`, `.ts` or `.tsx` file). In addition, `babel-jest` will inject the Babel plugin necessary for mock hoisting talked about in [ES Module mocking](manual-mocks#using-with-es-module-imports). tip Remember to include the default `babel-jest` transformer explicitly, if you wish to use it alongside with additional code preprocessors: ``` "transform": { "\\.[jt]sx?$": "babel-jest", "\\.css$": "some-css-transformer", } ``` Writing custom transformers --------------------------- You can write your own transformer. 
The API of a transformer is as follows: ``` interface TransformOptions<TransformerConfig = unknown> { supportsDynamicImport: boolean; supportsExportNamespaceFrom: boolean; supportsStaticESM: boolean; supportsTopLevelAwait: boolean; instrument: boolean; /** Cached file system which is used by `jest-runtime` to improve performance. */ cacheFS: Map<string, string>; /** Jest configuration of currently running project. */ config: ProjectConfig; /** Stringified version of the `config` - useful in cache busting. */ configString: string; /** Transformer configuration passed through `transform` option by the user. */ transformerConfig: TransformerConfig; } type TransformedSource = { code: string; map?: RawSourceMap | string | null; }; interface SyncTransformer<TransformerConfig = unknown> { canInstrument?: boolean; getCacheKey?: ( sourceText: string, sourcePath: string, options: TransformOptions<TransformerConfig>, ) => string; getCacheKeyAsync?: ( sourceText: string, sourcePath: string, options: TransformOptions<TransformerConfig>, ) => Promise<string>; process: ( sourceText: string, sourcePath: string, options: TransformOptions<TransformerConfig>, ) => TransformedSource; processAsync?: ( sourceText: string, sourcePath: string, options: TransformOptions<TransformerConfig>, ) => Promise<TransformedSource>; } interface AsyncTransformer<TransformerConfig = unknown> { canInstrument?: boolean; getCacheKey?: ( sourceText: string, sourcePath: string, options: TransformOptions<TransformerConfig>, ) => string; getCacheKeyAsync?: ( sourceText: string, sourcePath: string, options: TransformOptions<TransformerConfig>, ) => Promise<string>; process?: ( sourceText: string, sourcePath: string, options: TransformOptions<TransformerConfig>, ) => TransformedSource; processAsync: ( sourceText: string, sourcePath: string, options: TransformOptions<TransformerConfig>, ) => Promise<TransformedSource>; } type Transformer<TransformerConfig = unknown> = | SyncTransformer<TransformerConfig> | 
AsyncTransformer<TransformerConfig>; type TransformerCreator< X extends Transformer<TransformerConfig>, TransformerConfig = unknown, > = (transformerConfig?: TransformerConfig) => X; type TransformerFactory<X extends Transformer> = { createTransformer: TransformerCreator<X>; }; ``` note The definitions above were trimmed down for brevity. Full code can be found in [Jest repo on GitHub](https://github.com/facebook/jest/blob/main/packages/jest-transform/src/types.ts) (remember to choose the right tag/commit for your version of Jest). There are a couple of ways you can import code into Jest - using Common JS (`require`) or ECMAScript Modules (`import` - which exists in static and dynamic versions). Jest passes files through code transformation on demand (for instance when a `require` or `import` is evaluated). This process, also known as "transpilation", might happen *synchronously* (in the case of `require`), or *asynchronously* (in the case of `import` or `import()`, the latter of which also works from Common JS modules). For this reason, the interface exposes both pairs of methods for asynchronous and synchronous processes: `process{Async}` and `getCacheKey{Async}`. The latter is called to figure out if we need to call `process{Async}` at all. Since async transformation can happen synchronously without issue, it's possible for the async case to "fall back" to the sync variant, but not vice versa. So if your code base is ESM only, implementing the async variants is sufficient. Otherwise, if any code is loaded through `require` (including `createRequire` from within ESM), then you need to implement the synchronous variant. Be aware that `node_modules` is not transpiled with default config. 
Semi-related to this are the supports flags we pass (see `CallerTransformOptions` above), but those should be used within the transform to figure out if it should return ESM or CJS, and have no direct bearing on sync vs. async. Though not required, we *highly recommend* implementing `getCacheKey` as well, so we do not waste resources transpiling when we could have read its previous result from disk. You can use [`@jest/create-cache-key-function`](https://www.npmjs.com/package/@jest/create-cache-key-function) to help implement it. Instead of having your custom transformer implement the `Transformer` interface directly, you can choose to export `createTransformer`, a factory function to dynamically create transformers. This is to allow having a transformer config in your Jest config. Note that [ECMAScript module](ecmascript-modules) support is indicated by the passed in `supports*` options. Specifically `supportsDynamicImport: true` means the transformer can return `import()` expressions, which is supported by both ESM and CJS. If `supportsStaticESM: true` it means top level `import` statements are supported and the code will be interpreted as ESM and not CJS. See [Node's docs](https://nodejs.org/api/esm.html#esm_differences_between_es_modules_and_commonjs) for details on the differences. tip Make sure the `process{Async}` method returns a source map alongside the transformed code, so it is possible to report line information accurately in code coverage and test errors. Inline source maps also work but are slower. During the development of a transformer it can be useful to run Jest with `--no-cache` to frequently [delete cache](troubleshooting#caching-issues). ### Examples #### TypeScript with type checking While `babel-jest` by default will transpile TypeScript files, Babel will not verify the types. If you want that, you can use [`ts-jest`](https://github.com/kulshekhar/ts-jest). 
#### Transforming images to their path Importing images is a way to include them in your browser bundle, but they are not valid JavaScript. One way of handling it in Jest is to replace the imported value with its filename. ``` const path = require('path'); module.exports = { process(sourceText, sourcePath, options) { return { code: `module.exports = ${JSON.stringify(path.basename(sourcePath))};`, }; }, }; ``` fileTransformer.js ``` module.exports = { transform: { '\\.(jpg|jpeg|png|gif|eot|otf|webp|svg|ttf|woff|woff2|mp4|webm|wav|mp3|m4a|aac|oga)$': '<rootDir>/fileTransformer.js', }, }; ``` jest.config.js
jest Testing React Native Apps Testing React Native Apps ========================= At Facebook, we use Jest to test [React Native](https://reactnative.dev/) applications. Get a deeper insight into testing a working React Native app example by reading the following series: [Part 1: Jest – Snapshot come into play](https://callstack.com/blog/testing-react-native-with-the-new-jest-part-1-snapshots-come-into-play/) and [Part 2: Jest – Redux Snapshots for your Actions and Reducers](https://callstack.com/blog/testing-react-native-with-the-new-jest-part-2-redux-snapshots-for-your-actions-and-reducers/). Setup ----- Starting from react-native version 0.38, a Jest setup is included by default when running `react-native init`. The following configuration should be automatically added to your package.json file: ``` { "scripts": { "test": "jest" }, "jest": { "preset": "react-native" } } ``` *Note: If you are upgrading your react-native application and previously used the `jest-react-native` preset, remove the dependency from your `package.json` file and change the preset to `react-native` instead.* Run `yarn test` to run tests with Jest. Snapshot Test ------------- Let's create a [snapshot test](snapshot-testing) for a small intro component with a few views and text components and some styles: ``` import React, {Component} from 'react'; import {StyleSheet, Text, View} from 'react-native'; class Intro extends Component { render() { return ( <View style={styles.container}> <Text style={styles.welcome}>Welcome to React Native!</Text> <Text style={styles.instructions}> This is a React Native snapshot test. 
</Text> </View> ); } } const styles = StyleSheet.create({ container: { alignItems: 'center', backgroundColor: '#F5FCFF', flex: 1, justifyContent: 'center', }, instructions: { color: '#333333', marginBottom: 5, textAlign: 'center', }, welcome: { fontSize: 20, margin: 10, textAlign: 'center', }, }); export default Intro; ``` Intro.js Now let's use React's test renderer and Jest's snapshot feature to interact with the component and capture the rendered output and create a snapshot file: ``` import React from 'react'; import renderer from 'react-test-renderer'; import Intro from '../Intro'; test('renders correctly', () => { const tree = renderer.create(<Intro />).toJSON(); expect(tree).toMatchSnapshot(); }); ``` \_\_tests\_\_/Intro-test.js When you run `yarn test` or `jest`, this will produce an output file like this: ``` exports[`Intro renders correctly 1`] = ` <View style={ Object { "alignItems": "center", "backgroundColor": "#F5FCFF", "flex": 1, "justifyContent": "center", } }> <Text style={ Object { "fontSize": 20, "margin": 10, "textAlign": "center", } }> Welcome to React Native! </Text> <Text style={ Object { "color": "#333333", "marginBottom": 5, "textAlign": "center", } }> This is a React Native snapshot test. </Text> </View> `; ``` \_\_tests\_\_/\_\_snapshots\_\_/Intro-test.js.snap The next time you run the tests, the rendered output will be compared to the previously created snapshot. The snapshot should be committed along with code changes. When a snapshot test fails, you need to inspect whether it is an intended or unintended change. If the change is expected you can invoke Jest with `jest -u` to overwrite the existing snapshot. The code for this example is available at [examples/react-native](https://github.com/facebook/jest/tree/main/examples/react-native). Preset configuration -------------------- The preset sets up the environment and is very opinionated and based on what we found to be useful at Facebook. 
All of the configuration options can be overwritten just as they can be customized when no preset is used. ### Environment `react-native` ships with a Jest preset, so the `jest.preset` field of your `package.json` should point to `react-native`. The preset is a node environment that mimics the environment of a React Native app. Because it doesn't load any DOM or browser APIs, it greatly improves Jest's startup time. ### transformIgnorePatterns customization The [`transformIgnorePatterns`](configuration#transformignorepatterns-arraystring) option can be used to specify which files shall be transformed by Babel. Many `react-native` npm modules unfortunately don't pre-compile their source code before publishing. By default the `jest-react-native` preset only processes the project's own source files and `react-native`. If you have npm dependencies that have to be transformed you can customize this configuration option by including modules other than `react-native` by grouping them and separating them with the `|` operator: ``` { "transformIgnorePatterns": [ "node_modules/(?!(react-native|my-project|react-native-button)/)" ] } ``` You can test which paths would match (and thus be excluded from transformation) with a tool [like this](https://regex101.com/r/JsLIDM/1). `transformIgnorePatterns` will exclude a file from transformation if the path matches against **any** pattern provided. Splitting into multiple patterns could therefore have unintended results if you are not careful. In the example below, the exclusion (also known as a negative lookahead assertion) for `foo` and `bar` cancel each other out: ``` { "transformIgnorePatterns": ["node_modules/(?!foo/)", "node_modules/(?!bar/)"] // not what you want } ``` ### setupFiles If you'd like to provide additional configuration for every test file, the [`setupFiles` configuration option](configuration#setupfiles-array) can be used to specify setup scripts. 
### moduleNameMapper The [`moduleNameMapper`](configuration#modulenamemapper-objectstring-string--arraystring) can be used to map a module path to a different module. By default the preset maps all images to an image stub module but if a module cannot be found this configuration option can help: ``` { "moduleNameMapper": { "my-module.js": "<rootDir>/path/to/my-module.js" } } ``` Tips ---- ### Mock native modules using jest.mock The Jest preset built into `react-native` comes with a few default mocks that are applied on a react-native repository. However, some react-native components or third party components rely on native code to be rendered. In such cases, Jest's manual mocking system can help to mock out the underlying implementation. For example, if your code depends on a third party native video component called `react-native-video` you might want to stub it out with a manual mock like this: ``` jest.mock('react-native-video', () => 'Video'); ``` This will render the component as `<Video {...props} />` with all of its props in the snapshot output. See also [caveats around Enzyme and React 16](tutorial-react#snapshot-testing-with-mocks-enzyme-and-react-16). Sometimes you need to provide a more complex manual mock. 
For example if you'd like to forward the prop types or static fields of a native component to a mock, you can return a different React component from a mock through this helper from jest-react-native: ``` jest.mock('path/to/MyNativeComponent', () => { const mockComponent = require('react-native/jest/mockComponent'); return mockComponent('path/to/MyNativeComponent'); }); ``` Or if you'd like to create your own manual mock, you can do something like this: ``` jest.mock('Text', () => { const RealComponent = jest.requireActual('Text'); const React = require('react'); class Text extends React.Component { render() { return React.createElement('Text', this.props, this.props.children); } } Text.propTypes = RealComponent.propTypes; return Text; }); ``` In other cases you may want to mock a native module that isn't a React component. The same technique can be applied. We recommend inspecting the native module's source code and logging the module when running a react native app on a real device and then modeling a manual mock after the real module. If you end up mocking the same modules over and over it is recommended to define these mocks in a separate file and add it to the list of `setupFiles`. jest An Async Example An Async Example ================ First, enable Babel support in Jest as documented in the [Getting Started](getting-started#using-babel) guide. Let's implement a module that fetches user data from an API and returns the user name. ``` import request from './request'; export function getUserName(userID) { return request(`/users/${userID}`).then(user => user.name); } ``` user.js In the above implementation, we expect the `request.js` module to return a promise. We chain a call to `then` to receive the user name. 
Now imagine an implementation of `request.js` that goes to the network and fetches some user data: ``` const http = require('http'); export default function request(url) { return new Promise(resolve => { // This is an example of an http request, for example to fetch // user data from an API. // This module is being mocked in __mocks__/request.js http.get({path: url}, response => { let data = ''; response.on('data', _data => (data += _data)); response.on('end', () => resolve(data)); }); }); } ``` request.js Because we don't want to go to the network in our test, we are going to create a manual mock for our `request.js` module in the `__mocks__` folder (the folder is case-sensitive, `__MOCKS__` will not work). It could look something like this: ``` const users = { 4: {name: 'Mark'}, 5: {name: 'Paul'}, }; export default function request(url) { return new Promise((resolve, reject) => { const userID = parseInt(url.substr('/users/'.length), 10); process.nextTick(() => users[userID] ? resolve(users[userID]) : reject({ error: `User with ${userID} not found.`, }), ); }); } ``` \_\_mocks\_\_/request.js Now let's write a test for our async functionality. ``` jest.mock('../request'); import * as user from '../user'; // The assertion for a promise must be returned. it('works with promises', () => { expect.assertions(1); return user.getUserName(4).then(data => expect(data).toEqual('Mark')); }); ``` \_\_tests\_\_/user-test.js We call `jest.mock('../request')` to tell Jest to use our manual mock. `it` expects the return value to be a Promise that is going to be resolved. You can chain as many Promises as you like and call `expect` at any time, as long as you return a Promise at the end. `.resolves` ----------- There is a less verbose way using `resolves` to unwrap the value of a fulfilled promise together with any other matcher. If the promise is rejected, the assertion will fail. 
``` it('works with resolves', () => { expect.assertions(1); return expect(user.getUserName(5)).resolves.toEqual('Paul'); }); ``` `async`/`await` ---------------- Writing tests using the `async`/`await` syntax is also possible. Here is how you'd write the same examples from before: ``` // async/await can be used. it('works with async/await', async () => { expect.assertions(1); const data = await user.getUserName(4); expect(data).toEqual('Mark'); }); // async/await can also be used with `.resolves`. it('works with async/await and resolves', async () => { expect.assertions(1); await expect(user.getUserName(5)).resolves.toEqual('Paul'); }); ``` To enable async/await in your project, install [`@babel/preset-env`](https://babeljs.io/docs/en/babel-preset-env) and enable the feature in your `babel.config.js` file. Error handling -------------- Errors can be handled using the `.catch` method. Make sure to add `expect.assertions` to verify that a certain number of assertions are called. Otherwise a fulfilled promise would not fail the test: ``` // Testing for async errors using Promise.catch. it('tests error with promises', () => { expect.assertions(1); return user.getUserName(2).catch(e => expect(e).toEqual({ error: 'User with 2 not found.', }), ); }); // Or using async/await. it('tests error with async/await', async () => { expect.assertions(1); try { await user.getUserName(1); } catch (e) { expect(e).toEqual({ error: 'User with 1 not found.', }); } }); ``` `.rejects` ---------- The `.rejects` helper works like the `.resolves` helper. If the promise is fulfilled, the test will automatically fail. `expect.assertions(number)` is not required but recommended to verify that a certain number of [assertions](expect#expectassertionsnumber) are called during a test. It is otherwise easy to forget to `return`/`await` the `.resolves` assertions. ``` // Testing for async errors using `.rejects`. 
it('tests error with rejects', () => { expect.assertions(1); return expect(user.getUserName(3)).rejects.toEqual({ error: 'User with 3 not found.', }); }); // Or using async/await with `.rejects`. it('tests error with async/await and rejects', async () => { expect.assertions(1); await expect(user.getUserName(3)).rejects.toEqual({ error: 'User with 3 not found.', }); }); ``` The code for this example is available at [examples/async](https://github.com/facebook/jest/tree/main/examples/async). If you'd like to test timers, like `setTimeout`, take a look at the [Timer mocks](timer-mocks) documentation. jest Configuring Jest Configuring Jest ================ The Jest philosophy is to work great by default, but sometimes you just need more configuration power. It is recommended to define the configuration in a dedicated JavaScript, TypeScript or JSON file. The file will be discovered automatically if it is named `jest.config.js|ts|mjs|cjs|json`. You can use the [`--config`](cli#--configpath) flag to pass an explicit path to the file. note Keep in mind that the resulting configuration object must always be JSON-serializable. The configuration file should simply export an object: * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { verbose: true, }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { verbose: true, }; export default config; ``` Or a function returning an object: * JavaScript * TypeScript ``` /** @returns {Promise<import('jest').Config>} */ module.exports = async () => { return { verbose: true, }; }; ``` ``` import type {Config} from 'jest'; export default async (): Promise<Config> => { return { verbose: true, }; }; ``` tip To read TypeScript configuration files Jest requires [`ts-node`](https://npmjs.com/package/ts-node). Make sure it is installed in your project. 
The configuration also can be stored in a JSON file as a plain object: ``` { "bail": 1, "verbose": true } ``` jest.config.json Alternatively Jest's configuration can be defined through the `"jest"` key in the `package.json` of your project: ``` { "name": "my-project", "jest": { "verbose": true } } ``` package.json Options ------- info You can retrieve Jest's defaults from `jest-config` to extend them if needed: * JavaScript * TypeScript ``` const {defaults} = require('jest-config'); /** @type {import('jest').Config} */ const config = { moduleFileExtensions: [...defaults.moduleFileExtensions, 'mts', 'cts'], }; module.exports = config; ``` ``` import type {Config} from 'jest'; import {defaults} from 'jest-config'; const config: Config = { moduleFileExtensions: [...defaults.moduleFileExtensions, 'mts'], }; export default config; ``` * [`automock` [boolean]](#automock-boolean) * [`bail` [number | boolean]](#bail-number--boolean) * [`cacheDirectory` [string]](#cachedirectory-string) * [`clearMocks` [boolean]](#clearmocks-boolean) * [`collectCoverage` [boolean]](#collectcoverage-boolean) * [`collectCoverageFrom` [array]](#collectcoveragefrom-array) * [`coverageDirectory` [string]](#coveragedirectory-string) * [`coveragePathIgnorePatterns` [array<string>]](#coveragepathignorepatterns-arraystring) * [`coverageProvider` [string]](#coverageprovider-string) * [`coverageReporters` [array<string | [string, options]>]](#coveragereporters-arraystring--string-options) * [`coverageThreshold` [object]](#coveragethreshold-object) * [`dependencyExtractor` [string]](#dependencyextractor-string) * [`displayName` [string, object]](#displayname-string-object) * [`errorOnDeprecated` [boolean]](#errorondeprecated-boolean) * [`extensionsToTreatAsEsm` [array<string>]](#extensionstotreatasesm-arraystring) * [`fakeTimers` [object]](#faketimers-object) * [`forceCoverageMatch` [array<string>]](#forcecoveragematch-arraystring) * [`globals` [object]](#globals-object) * [`globalSetup` 
[string]](#globalsetup-string) * [`globalTeardown` [string]](#globalteardown-string) * [`haste` [object]](#haste-object) * [`injectGlobals` [boolean]](#injectglobals-boolean) * [`maxConcurrency` [number]](#maxconcurrency-number) * [`maxWorkers` [number | string]](#maxworkers-number--string) * [`moduleDirectories` [array<string>]](#moduledirectories-arraystring) * [`moduleFileExtensions` [array<string>]](#modulefileextensions-arraystring) * [`moduleNameMapper` [object<string, string | array<string>>]](#modulenamemapper-objectstring-string--arraystring) * [`modulePathIgnorePatterns` [array<string>]](#modulepathignorepatterns-arraystring) * [`modulePaths` [array<string>]](#modulepaths-arraystring) * [`notify` [boolean]](#notify-boolean) * [`notifyMode` [string]](#notifymode-string) * [`preset` [string]](#preset-string) * [`prettierPath` [string]](#prettierpath-string) * [`projects` [array<string | ProjectConfig>]](#projects-arraystring--projectconfig) * [`reporters` [array<moduleName | [moduleName, options]>]](#reporters-arraymodulename--modulename-options) * [`resetMocks` [boolean]](#resetmocks-boolean) * [`resetModules` [boolean]](#resetmodules-boolean) * [`resolver` [string]](#resolver-string) * [`restoreMocks` [boolean]](#restoremocks-boolean) * [`rootDir` [string]](#rootdir-string) * [`roots` [array<string>]](#roots-arraystring) * [`runner` [string]](#runner-string) * [`sandboxInjectedGlobals` [array<string>]](#sandboxinjectedglobals-arraystring) * [`setupFiles` [array]](#setupfiles-array) * [`setupFilesAfterEnv` [array]](#setupfilesafterenv-array) * [`slowTestThreshold` [number]](#slowtestthreshold-number) * [`snapshotFormat` [object]](#snapshotformat-object) * [`snapshotResolver` [string]](#snapshotresolver-string) * [`snapshotSerializers` [array<string>]](#snapshotserializers-arraystring) * [`testEnvironment` [string]](#testenvironment-string) * [`testEnvironmentOptions` [Object]](#testenvironmentoptions-object) * [`testFailureExitCode` 
[number]](#testfailureexitcode-number) * [`testMatch` [array<string>]](#testmatch-arraystring) * [`testPathIgnorePatterns` [array<string>]](#testpathignorepatterns-arraystring) * [`testRegex` [string | array<string>]](#testregex-string--arraystring) * [`testResultsProcessor` [string]](#testresultsprocessor-string) * [`testRunner` [string]](#testrunner-string) * [`testSequencer` [string]](#testsequencer-string) * [`testTimeout` [number]](#testtimeout-number) * [`transform` [object<string, pathToTransformer | [pathToTransformer, object]>]](#transform-objectstring-pathtotransformer--pathtotransformer-object) * [`transformIgnorePatterns` [array<string>]](#transformignorepatterns-arraystring) * [`unmockedModulePathPatterns` [array<string>]](#unmockedmodulepathpatterns-arraystring) * [`verbose` [boolean]](#verbose-boolean) * [`watchPathIgnorePatterns` [array<string>]](#watchpathignorepatterns-arraystring) * [`watchPlugins` [array<string | [string, Object]>]](#watchplugins-arraystring--string-object) * [`watchman` [boolean]](#watchman-boolean) * [`workerIdleMemoryLimit` [number|string]](#workeridlememorylimit-numberstring) * [`//` [string]](#-string) Reference --------- ### `automock` [boolean] Default: `false` This option tells Jest that all imported modules in your tests should be mocked automatically. All modules used in your tests will have a replacement implementation, keeping the API surface. 
Example: ``` export default { authorize: () => 'token', isAuthorized: secret => secret === 'wizard', }; ``` utils.js ``` import utils from '../utils'; test('if utils mocked automatically', () => { // Public methods of `utils` are now mock functions expect(utils.authorize.mock).toBeTruthy(); expect(utils.isAuthorized.mock).toBeTruthy(); // You can provide them with your own implementation // or pass the expected return value utils.authorize.mockReturnValue('mocked_token'); utils.isAuthorized.mockReturnValue(true); expect(utils.authorize()).toBe('mocked_token'); expect(utils.isAuthorized('not_wizard')).toBeTruthy(); }); ``` \_\_tests\_\_/automock.test.js note Node modules are automatically mocked when you have a manual mock in place (e.g.: `__mocks__/lodash.js`). More info [here](manual-mocks#mocking-node-modules). Node.js core modules, like `fs`, are not mocked by default. They can be mocked explicitly, like `jest.mock('fs')`. ### `bail` [number | boolean] Default: `0` By default, Jest runs all tests and reports all of the errors to the console upon completion. The `bail` config option can be used to have Jest stop running tests after `n` failures. Setting bail to `true` is the same as setting bail to `1`. ### `cacheDirectory` [string] Default: `"/tmp/<path>"` The directory where Jest should store its cached dependency information. Jest attempts to scan your dependency tree once (up-front) and cache it in order to ease some of the filesystem churn that needs to happen while running tests. This config option lets you customize where Jest stores that cache data on disk. ### `clearMocks` [boolean] Default: `false` Automatically clear mock calls, instances, contexts and results before every test. Equivalent to calling [`jest.clearAllMocks()`](jest-object#jestclearallmocks) before each test. This does not remove any mock implementation that may have been provided. 
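For instance, a minimal config sketch combining the options above, stopping after the first failure and clearing mock state between tests (the values shown are illustrative):

```javascript
/** @type {import('jest').Config} */
const config = {
  // stop the test run after the first failing test suite
  bail: 1,
  // clear mock.calls, mock.instances, mock.contexts and mock.results before each test
  clearMocks: true,
};

module.exports = config;
```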
### `collectCoverage` [boolean] Default: `false` Indicates whether coverage information should be collected while executing tests. Because this retrofits all executed files with coverage collection statements, it may significantly slow down your tests. Jest ships with two coverage providers: `babel` (default) and `v8`. See the [`coverageProvider`](#coverageprovider-string) option for more details. info The `babel` and `v8` coverage providers use `/* istanbul ignore next */` and `/* c8 ignore next */` comments to exclude lines from coverage reports, respectively. For more information, you can view the [`istanbuljs` documentation](https://github.com/istanbuljs/nyc#parsing-hints-ignoring-lines) and the [`c8` documentation](https://github.com/bcoe/c8#ignoring-uncovered-lines-functions-and-blocks). ### `collectCoverageFrom` [array] Default: `undefined` An array of [glob patterns](https://github.com/micromatch/micromatch) indicating a set of files for which coverage information should be collected. If a file matches the specified glob pattern, coverage information will be collected for it even if no tests exist for this file and it's never required in the test suite. * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { collectCoverageFrom: [ '**/*.{js,jsx}', '!**/node_modules/**', '!**/vendor/**', ], }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { collectCoverageFrom: [ '**/*.{js,jsx}', '!**/node_modules/**', '!**/vendor/**', ], }; export default config; ``` This will collect coverage information for all the files inside the project's `rootDir`, except the ones that match `**/node_modules/**` or `**/vendor/**`. tip Glob patterns are applied in the order they are specified in the config. For example `["!**/__tests__/**", "**/*.js"]` will not exclude `__tests__` because the negation is overwritten with the second pattern. 
In order to make the negated glob work in this example it has to come after `**/*.js`. note This option requires `collectCoverage` to be set to `true` or Jest to be invoked with `--coverage`. Help: If you are seeing coverage output such as... ``` =============================== Coverage summary =============================== Statements : Unknown% ( 0/0 ) Branches : Unknown% ( 0/0 ) Functions : Unknown% ( 0/0 ) Lines : Unknown% ( 0/0 ) ================================================================================ Jest: Coverage data for global was not found. ``` Most likely your glob patterns are not matching any files. Refer to the [micromatch](https://github.com/micromatch/micromatch) documentation to ensure your globs are compatible. ### `coverageDirectory` [string] Default: `undefined` The directory where Jest should output its coverage files. ### `coveragePathIgnorePatterns` [array<string>] Default: `["/node_modules/"]` An array of regexp pattern strings that are matched against all file paths before executing the test. If the file path matches any of the patterns, coverage information will be skipped. These pattern strings match against the full path. Use the `<rootDir>` string token to include the path to your project's root directory to prevent it from accidentally ignoring all of your files in different environments that may have different root directories. Example: `["<rootDir>/build/", "<rootDir>/node_modules/"]`. ### `coverageProvider` [string] Indicates which provider should be used to instrument code for coverage. Allowed values are `babel` (default) or `v8`. Note that using `v8` is considered experimental. This uses V8's built-in code coverage rather than one based on Babel. It is not as well tested, and it has also improved in the last few releases of Node.js. Using the latest version of Node.js (v14 at the time of this writing) will yield better results. 
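The provider is selected in the config; a minimal sketch opting into `v8` coverage:

```javascript
/** @type {import('jest').Config} */
const config = {
  collectCoverage: true,
  // use V8's built-in coverage instead of the default babel-based instrumentation
  coverageProvider: 'v8',
};

module.exports = config;
```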
### `coverageReporters` [array<string | [string, options]>] Default: `["clover", "json", "lcov", "text"]` A list of reporter names that Jest uses when writing coverage reports. Any [istanbul reporter](https://github.com/istanbuljs/istanbuljs/tree/master/packages/istanbul-reports/lib) can be used. tip Setting this option overwrites the default values. Add `"text"` or `"text-summary"` to see a coverage summary in the console output. Additional options can be passed using the tuple form. For example, you may hide coverage report lines for all fully-covered files: * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { coverageReporters: ['clover', 'json', 'lcov', ['text', {skipFull: true}]], }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { coverageReporters: ['clover', 'json', 'lcov', ['text', {skipFull: true}]], }; export default config; ``` For more information about the options object shape refer to `CoverageReporterWithOptions` type in the [type definitions](https://github.com/facebook/jest/tree/main/packages/jest-types/src/Config.ts). ### `coverageThreshold` [object] Default: `undefined` This will be used to configure minimum threshold enforcement for coverage results. Thresholds can be specified as `global`, as a [glob](https://github.com/isaacs/node-glob#glob-primer), and as a directory or file path. If thresholds aren't met, jest will fail. Thresholds specified as a positive number are taken to be the minimum percentage required. Thresholds specified as a negative number represent the maximum number of uncovered entities allowed. 
For example, with the following configuration jest will fail if there is less than 80% branch, line, and function coverage, or if there are more than 10 uncovered statements: * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { coverageThreshold: { global: { branches: 80, functions: 80, lines: 80, statements: -10, }, }, }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { coverageThreshold: { global: { branches: 80, functions: 80, lines: 80, statements: -10, }, }, }; export default config; ``` If globs or paths are specified alongside `global`, coverage data for matching paths will be subtracted from overall coverage and thresholds will be applied independently. Thresholds for globs are applied to all files matching the glob. If the file specified by path is not found, an error is returned. For example, with the following configuration: * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { coverageThreshold: { global: { branches: 50, functions: 50, lines: 50, statements: 50, }, './src/components/': { branches: 40, statements: 40, }, './src/reducers/**/*.js': { statements: 90, }, './src/api/very-important-module.js': { branches: 100, functions: 100, lines: 100, statements: 100, }, }, }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { coverageThreshold: { global: { branches: 50, functions: 50, lines: 50, statements: 50, }, './src/components/': { branches: 40, statements: 40, }, './src/reducers/**/*.js': { statements: 90, }, './src/api/very-important-module.js': { branches: 100, functions: 100, lines: 100, statements: 100, }, }, }; export default config; ``` Jest will fail if: * The `./src/components` directory has less than 40% branch or statement coverage. * One of the files matching the `./src/reducers/**/*.js` glob has less than 90% statement coverage. 
* The `./src/api/very-important-module.js` file has less than 100% coverage. * Every remaining file combined has less than 50% coverage (`global`). ### `dependencyExtractor` [string] Default: `undefined` This option allows the use of a custom dependency extractor. It must be a node module that exports an object with an `extract` function. E.g.: ``` const crypto = require('crypto'); const fs = require('fs'); module.exports = { extract(code, filePath, defaultExtract) { const deps = defaultExtract(code, filePath); // Scan the file and add dependencies in `deps` (which is a `Set`) return deps; }, getCacheKey() { return crypto .createHash('md5') .update(fs.readFileSync(__filename)) .digest('hex'); }, }; ``` The `extract` function should return an iterable (`Array`, `Set`, etc.) with the dependencies found in the code. That module can also contain a `getCacheKey` function to generate a cache key to determine if the logic has changed and any cached artifacts relying on it should be discarded. ### `displayName` [string, object] default: `undefined` Allows for a label to be printed alongside a test while it is running. This becomes more useful in multi-project repositories where there can be many jest configuration files. This visually tells which project a test belongs to. * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { displayName: 'CLIENT', }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { displayName: 'CLIENT', }; export default config; ``` Alternatively, an object with the properties `name` and `color` can be passed. This allows for a custom configuration of the background color of the displayName. `displayName` defaults to white when its value is a string. Jest uses [`chalk`](https://github.com/chalk/chalk) to provide the color. As such, all of the valid options for colors supported by `chalk` are also supported by Jest. 
* JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { displayName: { name: 'CLIENT', color: 'blue', }, }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { displayName: { name: 'CLIENT', color: 'blue', }, }; export default config; ``` ### `errorOnDeprecated` [boolean] Default: `false` Make calling deprecated APIs throw helpful error messages. Useful for easing the upgrade process. ### `extensionsToTreatAsEsm` [array<string>] Default: `[]` Jest will run `.mjs` and `.js` files with nearest `package.json`'s `type` field set to `module` as ECMAScript Modules. If you have any other files that should run with native ESM, you need to specify their file extension here. * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { extensionsToTreatAsEsm: ['.ts'], }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { extensionsToTreatAsEsm: ['.ts'], }; export default config; ``` caution Jest's ESM support is still experimental, see [its docs for more details](ecmascript-modules). ### `fakeTimers` [object] Default: `{}` The fake timers may be useful when a piece of code sets a long timeout that we don't want to wait for in a test. For additional details see [Fake Timers guide](timer-mocks) and [API documentation](jest-object#fake-timers). This option provides the default configuration of fake timers for all tests. Calling `jest.useFakeTimers()` in a test file will use these options or will override them if a configuration object is passed. 
For example, you can tell Jest to keep the original implementation of `process.nextTick()` and adjust the limit of recursive timers that will be run: * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { fakeTimers: { doNotFake: ['nextTick'], timerLimit: 1000, }, }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { fakeTimers: { doNotFake: ['nextTick'], timerLimit: 1000, }, }; export default config; ``` ``` // install fake timers for this file using the options from Jest configuration jest.useFakeTimers(); test('increase the limit of recursive timers for this and following tests', () => { jest.useFakeTimers({timerLimit: 5000}); // ... }); ``` fakeTime.test.js tip Instead of including `jest.useFakeTimers()` in each test file, you can enable fake timers globally for all tests in your Jest configuration: * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { fakeTimers: { enableGlobally: true, }, }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { fakeTimers: { enableGlobally: true, }, }; export default config; ``` Configuration options: ``` type FakeableAPI = | 'Date' | 'hrtime' | 'nextTick' | 'performance' | 'queueMicrotask' | 'requestAnimationFrame' | 'cancelAnimationFrame' | 'requestIdleCallback' | 'cancelIdleCallback' | 'setImmediate' | 'clearImmediate' | 'setInterval' | 'clearInterval' | 'setTimeout' | 'clearTimeout'; type ModernFakeTimersConfig = { /** * If set to `true` all timers will be advanced automatically by 20 milliseconds * every 20 milliseconds. A custom time delta may be provided by passing a number. * The default is `false`. */ advanceTimers?: boolean | number; /** * List of names of APIs that should not be faked. The default is `[]`, meaning * all APIs are faked. */ doNotFake?: Array<FakeableAPI>; /** Whether fake timers should be enabled for all test files. The default is `false`. 
*/ enableGlobally?: boolean; /** * Use the old fake timers implementation instead of one backed by `@sinonjs/fake-timers`. * The default is `false`. */ legacyFakeTimers?: boolean; /** Sets current system time to be used by fake timers. The default is `Date.now()`. */ now?: number; /** Maximum number of recursive timers that will be run. The default is `100_000` timers. */ timerLimit?: number; }; ``` Legacy Fake Timers For some reason you might have to use the legacy implementation of fake timers. Here is how to enable it globally (additional options are not supported): * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { fakeTimers: { enableGlobally: true, legacyFakeTimers: true, }, }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { fakeTimers: { enableGlobally: true, legacyFakeTimers: true, }, }; export default config; ``` ### `forceCoverageMatch` [array<string>] Default: `['']` Test files are normally excluded from code coverage collection. With this option, you can override this behavior and include otherwise ignored files in code coverage. For example, if you have tests in source files named with `.t.js` extension as follows: ``` export function sum(a, b) { return a + b; } if (process.env.NODE_ENV === 'test') { test('sum', () => { expect(sum(1, 2)).toBe(3); }); } ``` sum.t.js You can collect coverage from those files by setting `forceCoverageMatch`. * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { forceCoverageMatch: ['**/*.t.js'], }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { forceCoverageMatch: ['**/*.t.js'], }; export default config; ``` ### `globals` [object] Default: `{}` A set of global variables that need to be available in all test environments. 
For example, the following would create a global `__DEV__` variable set to `true` in all test environments: * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { globals: { __DEV__: true, }, }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { globals: { __DEV__: true, }, }; export default config; ``` Note that, if you specify a global reference value (like an object or array) here, and some code mutates that value in the midst of running a test, that mutation will *not* be persisted across test runs for other test files. In addition, the `globals` object must be json-serializable, so it can't be used to specify global functions. For that, you should use `setupFiles`. ### `globalSetup` [string] Default: `undefined` This option allows the use of a custom global setup module, which must export a function (it can be sync or async). The function will be triggered once before all test suites and it will receive two arguments: Jest's [`globalConfig`](https://github.com/facebook/jest/blob/main/packages/jest-types/src/Config.ts#L282) and [`projectConfig`](https://github.com/facebook/jest/blob/main/packages/jest-types/src/Config.ts#L347). info A global setup module configured in a project (using multi-project runner) will be triggered only when you run at least one test from this project. Any global variables that are defined through `globalSetup` can only be read in `globalTeardown`. You cannot retrieve globals defined here in your test suites. While code transformation is applied to the linked setup-file, Jest will **not** transform any code in `node_modules`. This is due to the need to load the actual transformers (e.g. `babel` or `typescript`) to perform transformation. ``` module.exports = async function (globalConfig, projectConfig) { console.log(globalConfig.testPathPattern); console.log(projectConfig.cache); // Set reference to mongod in order to close the server during teardown. 
globalThis.__MONGOD__ = mongod; }; ``` setup.js ``` module.exports = async function (globalConfig, projectConfig) { console.log(globalConfig.testPathPattern); console.log(projectConfig.cache); await globalThis.__MONGOD__.stop(); }; ``` teardown.js ### `globalTeardown` [string] Default: `undefined` This option allows the use of a custom global teardown module which must export a function (it can be sync or async). The function will be triggered once after all test suites and it will receive two arguments: Jest's [`globalConfig`](https://github.com/facebook/jest/blob/main/packages/jest-types/src/Config.ts#L282) and [`projectConfig`](https://github.com/facebook/jest/blob/main/packages/jest-types/src/Config.ts#L347). info A global teardown module configured in a project (using multi-project runner) will be triggered only when you run at least one test from this project. The same caveat concerning transformation of `node_modules` as for `globalSetup` applies to `globalTeardown`. ### `haste` [object] Default: `undefined` This will be used to configure the behavior of `jest-haste-map`, Jest's internal file crawler/cache system. The following options are supported: ``` type HasteConfig = { /** Whether to hash files using SHA-1. */ computeSha1?: boolean; /** The platform to use as the default, e.g. 'ios'. */ defaultPlatform?: string | null; /** Force use of Node's `fs` APIs rather than shelling out to `find` */ forceNodeFilesystemAPI?: boolean; /** * Whether to follow symlinks when crawling for files. * This options cannot be used in projects which use watchman. * Projects with `watchman` set to true will error if this option is set to true. */ enableSymlinks?: boolean; /** Path to a custom implementation of Haste. */ hasteImplModulePath?: string; /** All platforms to target, e.g ['ios', 'android']. */ platforms?: Array<string>; /** Whether to throw on error on module collision. 
*/ throwOnModuleCollision?: boolean; /** Custom HasteMap module */ hasteMapModulePath?: string; /** Whether to retain all files, allowing e.g. search for tests in `node_modules`. */ retainAllFiles?: boolean; }; ``` ### `injectGlobals` [boolean] Default: `true` Insert Jest's globals (`expect`, `test`, `describe`, `beforeEach` etc.) into the global environment. If you set this to `false`, you should import from `@jest/globals`, e.g. ``` import {expect, jest, test} from '@jest/globals'; jest.useFakeTimers(); test('some test', () => { expect(Date.now()).toBe(0); }); ``` note This option is only supported using the default `jest-circus` test runner. ### `maxConcurrency` [number] Default: `5` A number limiting the number of tests that are allowed to run at the same time when using `test.concurrent`. Any test above this limit will be queued and executed once a slot is released. ### `maxWorkers` [number | string] Specifies the maximum number of workers the worker-pool will spawn for running tests. In single run mode, this defaults to the number of the cores available on your machine minus one for the main thread. In watch mode, this defaults to half of the available cores on your machine to ensure Jest is unobtrusive and does not grind your machine to a halt. It may be useful to adjust this in resource limited environments like CIs but the defaults should be adequate for most use-cases. For environments with variable CPUs available, you can use percentage based configuration: * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { maxWorkers: '50%', }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { maxWorkers: '50%', }; export default config; ``` ### `moduleDirectories` [array<string>] Default: `["node_modules"]` An array of directory names to be searched recursively up from the requiring module's location. 
Setting this option will *override* the default; if you wish to still search `node_modules` for packages, include it along with any other options: * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { moduleDirectories: ['node_modules', 'bower_components'], }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { moduleDirectories: ['node_modules', 'bower_components'], }; export default config; ``` ### `moduleFileExtensions` [array<string>] Default: `["js", "mjs", "cjs", "jsx", "ts", "tsx", "json", "node"]` An array of file extensions your modules use. If you require modules without specifying a file extension, these are the extensions Jest will look for, in left-to-right order. We recommend placing the extensions most commonly used in your project on the left, so if you are using TypeScript, you may want to consider moving "ts" and/or "tsx" to the beginning of the array. ### `moduleNameMapper` [object<string, string | array<string>>] Default: `null` A map from regular expressions to module names or to arrays of module names that allow you to stub out resources, like images or styles, with a single module. Modules that are mapped to an alias are unmocked by default, regardless of whether automocking is enabled or not. Use the `<rootDir>` string token to refer to [`rootDir`](#rootdir-string) value if you want to use file paths. Additionally, you can substitute captured regex groups using numbered backreferences. 
* JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { moduleNameMapper: { '^image![a-zA-Z0-9$_-]+$': 'GlobalImageStub', '^[./a-zA-Z0-9$_-]+\\.png$': '<rootDir>/RelativeImageStub.js', 'module_name_(.*)': '<rootDir>/substituted_module_$1.js', 'assets/(.*)': [ '<rootDir>/images/$1', '<rootDir>/photos/$1', '<rootDir>/recipes/$1', ], }, }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { moduleNameMapper: { '^image![a-zA-Z0-9$_-]+$': 'GlobalImageStub', '^[./a-zA-Z0-9$_-]+\\.png$': '<rootDir>/RelativeImageStub.js', 'module_name_(.*)': '<rootDir>/substituted_module_$1.js', 'assets/(.*)': [ '<rootDir>/images/$1', '<rootDir>/photos/$1', '<rootDir>/recipes/$1', ], }, }; export default config; ``` The order in which the mappings are defined matters. Patterns are checked one by one until one fits. The most specific rule should be listed first. This is true for arrays of module names as well. info If you provide module names without boundaries `^$` it may cause hard to spot errors. E.g. `relay` will replace all modules which contain `relay` as a substring in its name: `relay`, `react-relay` and `graphql-relay` will all be pointed to your stub. ### `modulePathIgnorePatterns` [array<string>] Default: `[]` An array of regexp pattern strings that are matched against all module paths before those paths are to be considered 'visible' to the module loader. If a given module's path matches any of the patterns, it will not be `require()`-able in the test environment. These pattern strings match against the full path. Use the `<rootDir>` string token to include the path to your project's root directory to prevent it from accidentally ignoring all of your files in different environments that may have different root directories. 
* JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { modulePathIgnorePatterns: ['<rootDir>/build/'], }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { modulePathIgnorePatterns: ['<rootDir>/build/'], }; export default config; ``` ### `modulePaths` [array<string>] Default: `[]` An alternative API to setting the `NODE_PATH` env variable, `modulePaths` is an array of absolute paths to additional locations to search when resolving modules. Use the `<rootDir>` string token to include the path to your project's root directory. * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { modulePaths: ['<rootDir>/app/'], }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { modulePaths: ['<rootDir>/app/'], }; export default config; ``` ### `notify` [boolean] Default: `false` Activates native OS notifications for test results. To display the notifications, Jest needs the [`node-notifier`](https://github.com/mikaelbr/node-notifier) package, which must be installed separately: * npm * Yarn ``` npm install --save-dev node-notifier ``` ``` yarn add --dev node-notifier ``` tip On macOS, remember to allow notifications from `terminal-notifier` under System Preferences > Notifications & Focus. On Windows, `node-notifier` creates a new start menu entry on first use and does not display the notification. Notifications will be properly displayed on subsequent runs. ### `notifyMode` [string] Default: `failure-change` Specifies notification mode. Requires `notify: true`. #### Modes * `always`: always send a notification. * `failure`: send a notification when tests fail. * `success`: send a notification when tests pass. * `change`: send a notification when the status changed. * `success-change`: send a notification when tests pass or once when it fails. * `failure-change`: send a notification when tests fail or once when it passes. 
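For example, a sketch enabling notifications that fire only when the overall status changes (remember that `notifyMode` requires `notify: true`):

```javascript
/** @type {import('jest').Config} */
const config = {
  // requires the node-notifier package to be installed
  notify: true,
  // only send a notification when the pass/fail status changes
  notifyMode: 'change',
};

module.exports = config;
```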
### `preset` [string] Default: `undefined` A preset that is used as a base for Jest's configuration. A preset should point to an npm module that has a `jest-preset.json`, `jest-preset.js`, `jest-preset.cjs` or `jest-preset.mjs` file at the root. For example, this preset `foo-bar/jest-preset.js` will be configured as follows: * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { preset: 'foo-bar', }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { preset: 'foo-bar', }; export default config; ``` Presets may also be relative to filesystem paths: * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { preset: './node_modules/foo-bar/jest-preset.js', }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { preset: './node_modules/foo-bar/jest-preset.js', }; export default config; ``` info Note that if you also have specified [`rootDir`](#rootdir-string) that the resolution of this file will be relative to that root directory. ### `prettierPath` [string] Default: `'prettier'` Sets the path to the [`prettier`](https://prettier.io/) node module used to update inline snapshots. ### `projects` [array<string | ProjectConfig>] Default: `undefined` When the `projects` configuration is provided with an array of paths or glob patterns, Jest will run tests in all of the specified projects at the same time. This is great for monorepos or when working on multiple projects at the same time. * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { projects: ['<rootDir>', '<rootDir>/examples/*'], }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { projects: ['<rootDir>', '<rootDir>/examples/*'], }; export default config; ``` This example configuration will run Jest in the root directory as well as in every folder in the examples directory. 
You can have an unlimited number of projects running in the same Jest instance. The projects feature can also be used to run multiple configurations or multiple [runners](#runner-string). For this purpose, you can pass an array of configuration objects. For example, to run both tests and ESLint (via [jest-runner-eslint](https://github.com/jest-community/jest-runner-eslint)) in the same invocation of Jest: * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { projects: [ { displayName: 'test', }, { displayName: 'lint', runner: 'jest-runner-eslint', testMatch: ['<rootDir>/**/*.js'], }, ], }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { projects: [ { displayName: 'test', }, { displayName: 'lint', runner: 'jest-runner-eslint', testMatch: ['<rootDir>/**/*.js'], }, ], }; export default config; ``` tip When using the multi-project runner, it's recommended to add a `displayName` for each project. This will show the `displayName` of a project next to its tests. ### `reporters` [array<moduleName | [moduleName, options]>] Default: `undefined` Use this configuration option to add reporters to Jest. It must be a list of reporter names; additional options can be passed to a reporter using the tuple form: * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { reporters: [ 'default', ['<rootDir>/custom-reporter.js', {banana: 'yes', pineapple: 'no'}], ], }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { reporters: [ 'default', ['<rootDir>/custom-reporter.js', {banana: 'yes', pineapple: 'no'}], ], }; export default config; ``` #### Default Reporter If custom reporters are specified, the default Jest reporter will be overridden. 
If you wish to keep it, `'default'` must be passed as a reporter name: * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { reporters: [ 'default', ['jest-junit', {outputDirectory: 'reports', outputName: 'report.xml'}], ], }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { reporters: [ 'default', ['jest-junit', {outputDirectory: 'reports', outputName: 'report.xml'}], ], }; export default config; ``` #### GitHub Actions Reporter If included in the list, the built-in GitHub Actions Reporter will annotate changed files with test failure messages: * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { reporters: ['default', 'github-actions'], }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { reporters: ['default', 'github-actions'], }; export default config; ``` #### Summary Reporter The summary reporter prints out a summary of all tests. It is a part of the default reporter, hence it will be enabled if `'default'` is included in the list. For instance, you might want to use it as a stand-alone reporter instead of the default one, or together with [Silent Reporter](https://github.com/rickhanlonii/jest-silent-reporter): * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { reporters: ['jest-silent-reporter', 'summary'], }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { reporters: ['jest-silent-reporter', 'summary'], }; export default config; ``` #### Custom Reporters tip Hungry for reporters? Take a look at the long list of [awesome reporters](https://github.com/jest-community/awesome-jest/blob/main/README.md#reporters) from Awesome Jest.
A custom reporter module must export a class that takes `globalConfig`, `reporterOptions` and `reporterContext` as constructor arguments and implements at least the `onRunComplete()` method (for the full list of methods and argument types see the `Reporter` interface in [packages/jest-reporters/src/types.ts](https://github.com/facebook/jest/blob/main/packages/jest-reporters/src/types.ts)): ``` class CustomReporter { constructor(globalConfig, reporterOptions, reporterContext) { this._globalConfig = globalConfig; this._options = reporterOptions; this._context = reporterContext; } onRunComplete(testContexts, results) { console.log('Custom reporter output:'); console.log('global config: ', this._globalConfig); console.log('options for this reporter from Jest config: ', this._options); console.log('reporter context passed from test scheduler: ', this._context); } // Optionally, reporters can force Jest to exit with a non-zero code by returning // an `Error` from the `getLastError()` method. getLastError() { if (this._shouldFail) { return new Error('Custom error reported!'); } } } module.exports = CustomReporter; ``` custom-reporter.js ### `resetMocks` [boolean] Default: `false` Automatically reset mock state before every test. Equivalent to calling [`jest.resetAllMocks()`](jest-object#jestresetallmocks) before each test. This will lead to any mocks having their fake implementations removed but does not restore their initial implementation. ### `resetModules` [boolean] Default: `false` By default, each test file gets its own independent module registry. Enabling `resetModules` goes a step further and resets the module registry before running each individual test. This is useful to isolate modules for every test so that the local module state doesn't conflict between tests. This can be done programmatically using [`jest.resetModules()`](jest-object#jestresetmodules). ### `resolver` [string] Default: `undefined` This option allows the use of a custom resolver.
This resolver must be a module that exports *either*: 1. a function expecting a string as the first argument for the path to resolve and an options object as the second argument. The function should either return a path to the module that should be resolved or throw an error if the module can't be found. *or* 2. an object containing `async` and/or `sync` properties. The `sync` property should be a function with the shape explained above, and the `async` property should also be a function that accepts the same arguments, but returns a promise which resolves with the path to the module or rejects with an error. The options object provided to resolvers has the shape: ``` type ResolverOptions = { /** Directory to begin resolving from. */ basedir: string; /** List of export conditions. */ conditions?: Array<string>; /** Instance of default resolver. */ defaultResolver: (path: string, options: ResolverOptions) => string; /** List of file extensions to search in order. */ extensions?: Array<string>; /** List of directory names to be looked up for modules recursively. */ moduleDirectory?: Array<string>; /** List of `require.paths` to use if nothing is found in `node_modules`. */ paths?: Array<string>; /** Allows transforming parsed `package.json` contents. */ packageFilter?: (pkg: PackageJSON, file: string, dir: string) => PackageJSON; /** Allows transforming a path within a package. */ pathFilter?: (pkg: PackageJSON, path: string, relativePath: string) => string; /** Current root directory. */ rootDir?: string; }; ``` tip The `defaultResolver` passed as an option is the Jest default resolver which might be useful when you write your custom one. It takes the same arguments as your custom synchronous one, e.g. `(path, options)`, and returns a string or throws.
For example, if you want to respect Browserify's [`"browser"` field](https://github.com/browserify/browserify-handbook/blob/master/readme.markdown#browser-field), you can use the following resolver: ``` const browserResolve = require('browser-resolve'); module.exports = browserResolve.sync; ``` resolver.js And add it to Jest configuration: * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { resolver: '<rootDir>/resolver.js', }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { resolver: '<rootDir>/resolver.js', }; export default config; ``` By combining `defaultResolver` and `packageFilter` we can implement a `package.json` "pre-processor" that allows us to change how the default resolver will resolve modules. For example, imagine we want to use the field `"module"` if it is present, otherwise fall back to `"main"`: ``` module.exports = (path, options) => { // Call the defaultResolver, so we leverage its cache, error handling, etc. return options.defaultResolver(path, { ...options, // Use packageFilter to process parsed `package.json` before the resolution (see https://www.npmjs.com/package/resolve#resolveid-opts-cb) packageFilter: pkg => { return { ...pkg, // Alter the value of `main` before resolving the package main: pkg.module || pkg.main, }; }, }); }; ``` ### `restoreMocks` [boolean] Default: `false` Automatically restore mock state and implementation before every test. Equivalent to calling [`jest.restoreAllMocks()`](jest-object#jestrestoreallmocks) before each test. This will lead to any mocks having their fake implementations removed and their initial implementation restored. ### `rootDir` [string] Default: The root of the directory containing your Jest [config file](#) *or* the `package.json` *or* the [`pwd`](http://en.wikipedia.org/wiki/Pwd) if no `package.json` is found The root directory that Jest should scan for tests and modules within.
If you put your Jest config inside your `package.json` and want the root directory to be the root of your repo, the value for this config param will default to the directory of the `package.json`. Oftentimes, you'll want to set this to `'src'` or `'lib'`, corresponding to where in your repository the code is stored. tip Using `'<rootDir>'` as a string token in any other path-based configuration settings will refer back to this value. For example, if you want a [`setupFiles`](#setupfiles-array) entry to point at the `some-setup.js` file at the root of the project, set its value to: `'<rootDir>/some-setup.js'`. ### `roots` [array<string>] Default: `["<rootDir>"]` A list of paths to directories that Jest should use to search for files in. There are times where you only want Jest to search in a single sub-directory (such as cases where you have a `src/` directory in your repo), but prevent it from accessing the rest of the repo. info While `rootDir` is mostly used as a token to be re-used in other configuration options, `roots` is used by the internals of Jest to locate **test files and source files**. This applies also when searching for manual mocks for modules from `node_modules` (`__mocks__` will need to live in one of the `roots`). By default, `roots` has a single entry `<rootDir>` but there are cases where you may want to have multiple roots within one project, for example `roots: ["<rootDir>/src/", "<rootDir>/tests/"]`. ### `runner` [string] Default: `"jest-runner"` This option allows you to use a custom runner instead of Jest's default test runner. Examples of runners include: * [`jest-runner-eslint`](https://github.com/jest-community/jest-runner-eslint) * [`jest-runner-mocha`](https://github.com/rogeliog/jest-runner-mocha) * [`jest-runner-tsc`](https://github.com/azz/jest-runner-tsc) * [`jest-runner-prettier`](https://github.com/keplersj/jest-runner-prettier) info The `runner` property value can omit the `jest-runner-` prefix of the package name. 
To write a test-runner, export a class which accepts `globalConfig` in the constructor and has a `runTests` method with the signature: ``` async function runTests( tests: Array<Test>, watcher: TestWatcher, onStart: OnTestStart, onResult: OnTestSuccess, onFailure: OnTestFailure, options: TestRunnerOptions, ): Promise<void>; ``` If you need to restrict your test-runner to run in serial rather than in parallel, your class should have the property `isSerial` set to `true`. ### `sandboxInjectedGlobals` [array<string>] tip Renamed from `extraGlobals` in Jest 28. Default: `undefined` Test files run inside a [vm](https://nodejs.org/api/vm.html), which slows calls to global context properties (e.g. `Math`). With this option you can specify extra properties to be defined inside the vm for faster lookups. For example, if your tests call `Math` often, you can pass it by setting `sandboxInjectedGlobals`. * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { sandboxInjectedGlobals: ['Math'], }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { sandboxInjectedGlobals: ['Math'], }; export default config; ``` note This option has no effect if you use [native ESM](ecmascript-modules). ### `setupFiles` [array] Default: `[]` A list of paths to modules that run some code to configure or set up the testing environment. Each setupFile will be run once per test file. Since every test runs in its own environment, these scripts will be executed in the testing environment before executing [`setupFilesAfterEnv`](#setupfilesafterenv-array) and before the test code itself. tip If your setup script is a CJS module, it may export an async function. Jest will call the function and await its result. This might be useful to fetch some data asynchronously. If the file is an ESM module, simply use top-level await to achieve the same result.
### `setupFilesAfterEnv` [array] Default: `[]` A list of paths to modules that run some code to configure or set up the testing framework before each test file in the suite is executed. Since [`setupFiles`](#setupfiles-array) executes before the test framework is installed in the environment, this script file gives you the opportunity to run some code immediately after the test framework has been installed in the environment but before the test code itself. In other words, `setupFilesAfterEnv` modules are meant for code which would otherwise be repeated in each test file. Having the test framework installed makes Jest [globals](api), the [`jest` object](jest-object) and [`expect`](expect) accessible in the modules. For example, you can add extra matchers from the [`jest-extended`](https://github.com/jest-community/jest-extended) library or call [setup and teardown](setup-teardown) hooks: ``` const matchers = require('jest-extended'); expect.extend(matchers); afterEach(() => { jest.useRealTimers(); }); ``` setup-jest.js * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { setupFilesAfterEnv: ['<rootDir>/setup-jest.js'], }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { setupFilesAfterEnv: ['<rootDir>/setup-jest.js'], }; export default config; ``` ### `slowTestThreshold` [number] Default: `5` The number of seconds after which a test is considered slow and reported as such in the results. ### `snapshotFormat` [object] Default: `{escapeString: false, printBasicPrototype: false}` Allows overriding specific snapshot formatting options documented in the [pretty-format readme](https://www.npmjs.com/package/pretty-format#usage-with-options), with the exceptions of `compareKeys` and `plugins`.
For example, this config would have the snapshot formatter not print a prefix for "Object" and "Array": * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { snapshotFormat: { printBasicPrototype: false, }, }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { snapshotFormat: { printBasicPrototype: false, }, }; export default config; ``` ``` test('does not show prototypes for object and array inline', () => { const object = { array: [{hello: 'Danger'}], }; expect(object).toMatchInlineSnapshot(` { "array": [ { "hello": "Danger", }, ], } `); }); ``` some.test.js ### `snapshotResolver` [string] Default: `undefined` The path to a module that can resolve test<->snapshot path. This config option lets you customize where Jest stores snapshot files on disk. ``` module.exports = { // resolves from test to snapshot path resolveSnapshotPath: (testPath, snapshotExtension) => testPath.replace('__tests__', '__snapshots__') + snapshotExtension, // resolves from snapshot to test path resolveTestPath: (snapshotFilePath, snapshotExtension) => snapshotFilePath .replace('__snapshots__', '__tests__') .slice(0, -snapshotExtension.length), // Example test path, used for preflight consistency check of the implementation above testPathForConsistencyCheck: 'some/__tests__/example.test.js', }; ``` custom-resolver.js ### `snapshotSerializers` [array<string>] Default: `[]` A list of paths to snapshot serializer modules Jest should use for snapshot testing. Jest has default serializers for built-in JavaScript types, HTML elements (Jest 20.0.0+), ImmutableJS (Jest 20.0.0+) and for React elements. See [snapshot test tutorial](tutorial-react-native#snapshot-test) for more information. 
``` module.exports = { serialize(val, config, indentation, depth, refs, printer) { return `Pretty foo: ${printer(val.foo)}`; }, test(val) { return val && Object.prototype.hasOwnProperty.call(val, 'foo'); }, }; ``` custom-serializer.js `printer` is a function that serializes a value using existing plugins. Add `custom-serializer` to your Jest configuration: * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { snapshotSerializers: ['path/to/custom-serializer.js'], }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { snapshotSerializers: ['path/to/custom-serializer.js'], }; export default config; ``` Finally, tests would look as follows: ``` test('uses the custom serializer', () => { const bar = { foo: { x: 1, y: 2, }, }; expect(bar).toMatchSnapshot(); }); ``` Rendered snapshot: ``` Pretty foo: Object { "x": 1, "y": 2, } ``` To make a dependency explicit instead of implicit, you can call [`expect.addSnapshotSerializer`](expect#expectaddsnapshotserializerserializer) to add a module for an individual test file instead of adding its path to `snapshotSerializers` in Jest configuration. More about the serializers API can be found [here](https://github.com/facebook/jest/tree/main/packages/pretty-format/README.md#serialize). ### `testEnvironment` [string] Default: `"node"` The test environment that will be used for testing. The default environment in Jest is a Node.js environment. If you are building a web app, you can use a browser-like environment through [`jsdom`](https://github.com/jsdom/jsdom) instead. By adding a `@jest-environment` docblock at the top of the file, you can specify another environment to be used for all tests in that file: ``` /** * @jest-environment jsdom */ test('use jsdom in this test file', () => { const element = document.createElement('div'); expect(element).not.toBeNull(); }); ``` You can create your own module that will be used for setting up the test environment.
The module must export a class with `setup`, `teardown` and `getVmContext` methods. You can also pass variables from this module to your test suites by assigning them to the `this.global` object – this will make them available in your test suites as global variables. The constructor is passed [global config](https://github.com/facebook/jest/blob/491e7cb0f2daa8263caccc72d48bdce7ba759b11/packages/jest-types/src/Config.ts#L284) and [project config](https://github.com/facebook/jest/blob/491e7cb0f2daa8263caccc72d48bdce7ba759b11/packages/jest-types/src/Config.ts#L349) as its first argument, and [`testEnvironmentContext`](https://github.com/facebook/jest/blob/491e7cb0f2daa8263caccc72d48bdce7ba759b11/packages/jest-environment/src/index.ts#L13) as its second. The class may optionally expose an asynchronous `handleTestEvent` method to bind to events fired by [`jest-circus`](https://github.com/facebook/jest/tree/main/packages/jest-circus). Normally, the `jest-circus` test runner pauses until a promise returned from `handleTestEvent` gets fulfilled, **except for the following events**: `start_describe_definition`, `finish_describe_definition`, `add_hook`, `add_test` or `error` (for the up-to-date list you can look at the [SyncEvent type in the types definitions](https://github.com/facebook/jest/tree/main/packages/jest-types/src/Circus.ts)). This is for backward compatibility and the `process.on('unhandledRejection', callback)` signature, but it should not be a problem for most use cases. Any docblock pragmas in test files will be passed to the environment constructor and can be used for per-test configuration. If a pragma does not have a value, it will be present in the object with its value set to an empty string. If a pragma is not present, it will not be present in the object. To use this class as your custom environment, refer to it by its full path within the project.
For example, if your class is stored in `my-custom-environment.js` in some subfolder of your project, then the annotation might look like this: ``` /** * @jest-environment ./src/test/my-custom-environment */ ``` info TestEnvironment is sandboxed. Each test suite will trigger setup/teardown in its own TestEnvironment. Example: ``` // my-custom-environment const NodeEnvironment = require('jest-environment-node').default; class CustomEnvironment extends NodeEnvironment { constructor(config, context) { super(config, context); console.log(config.globalConfig); console.log(config.projectConfig); this.testPath = context.testPath; this.docblockPragmas = context.docblockPragmas; } async setup() { await super.setup(); await someSetupTasks(this.testPath); this.global.someGlobalObject = createGlobalObject(); // Will trigger if docblock contains @my-custom-pragma my-pragma-value if (this.docblockPragmas['my-custom-pragma'] === 'my-pragma-value') { // ... } } async teardown() { this.global.someGlobalObject = destroyGlobalObject(); await someTeardownTasks(); await super.teardown(); } getVmContext() { return super.getVmContext(); } async handleTestEvent(event, state) { if (event.name === 'test_start') { // ... } } } module.exports = CustomEnvironment; ``` ``` // my-test-suite /** * @jest-environment ./my-custom-environment */ let someGlobalObject; beforeAll(() => { someGlobalObject = globalThis.someGlobalObject; }); ``` ### `testEnvironmentOptions` [Object] Default: `{}` Test environment options that will be passed to the `testEnvironment`. The relevant options depend on the environment. For example, in `jest-environment-jsdom`, you can override options given to [`jsdom`](https://github.com/jsdom/jsdom) such as `{html: '<html lang="zh-cmn-Hant"></html>', url: 'https://jestjs.io/', userAgent: 'Agent/007'}`.
Both `jest-environment-jsdom` and `jest-environment-node` allow specifying `customExportConditions`, which allow you to control which versions of a library are loaded from `exports` in `package.json`. `jest-environment-jsdom` defaults to `['browser']`. `jest-environment-node` defaults to `['node', 'node-addons']`. These options can also be passed in a docblock, similar to `testEnvironment`. Note that it must be parseable by `JSON.parse`. Example: ``` /** * @jest-environment jsdom * @jest-environment-options {"url": "https://jestjs.io/"} */ test('use jsdom and set the URL in this test file', () => { expect(window.location.href).toBe('https://jestjs.io/'); }); ``` ### `testFailureExitCode` [number] Default: `1` The exit code Jest returns on test failure. info This does not change the exit code in the case of Jest errors (e.g. invalid configuration). ### `testMatch` [array<string>] (default: `[ "**/__tests__/**/*.[jt]s?(x)", "**/?(*.)+(spec|test).[jt]s?(x)" ]`) The glob patterns Jest uses to detect test files. By default it looks for `.js`, `.jsx`, `.ts` and `.tsx` files inside of `__tests__` folders, as well as any files with a suffix of `.test` or `.spec` (e.g. `Component.test.js` or `Component.spec.js`). It will also find files called `test.js` or `spec.js`. See the [micromatch](https://github.com/micromatch/micromatch) package for details of the patterns you can specify. See also [`testRegex` [string | array<string>]](#testregex-string--arraystring), but note that you cannot specify both options. tip Each glob pattern is applied in the order they are specified in the config. For example `["!**/__fixtures__/**", "**/__tests__/**/*.js"]` will not exclude `__fixtures__` because the negation is overwritten with the second pattern. In order to make the negated glob work in this example it has to come after `**/__tests__/**/*.js`. 
### `testPathIgnorePatterns` [array<string>] Default: `["/node_modules/"]` An array of regexp pattern strings that are matched against all test paths before executing the test. If the test path matches any of the patterns, it will be skipped. These pattern strings match against the full path. Use the `<rootDir>` string token to include the path to your project's root directory to prevent it from accidentally ignoring all of your files in different environments that may have different root directories. Example: `["<rootDir>/build/", "<rootDir>/node_modules/"]`. ### `testRegex` [string | array<string>] Default: `(/__tests__/.*|(\\.|/)(test|spec))\\.[jt]sx?$` The pattern or patterns Jest uses to detect test files. By default it looks for `.js`, `.jsx`, `.ts` and `.tsx` files inside of `__tests__` folders, as well as any files with a suffix of `.test` or `.spec` (e.g. `Component.test.js` or `Component.spec.js`). It will also find files called `test.js` or `spec.js`. See also [`testMatch` [array<string>]](#testmatch-arraystring), but note that you cannot specify both options. The following is a visualization of the default regex: ``` ├── __tests__ │ └── component.spec.js # test │ └── anything # test ├── package.json # not test ├── foo.test.js # test ├── bar.spec.jsx # test └── component.js # not test ``` info `testRegex` will try to detect test files using the **absolute file path**, therefore, having a folder with a name that matches it will run all the files as tests. ### `testResultsProcessor` [string] Default: `undefined` This option allows the use of a custom results processor. 
This processor must be a node module that exports a function expecting an object with the following structure as the first argument and return it: ``` { "success": boolean, "startTime": epoch, "numTotalTestSuites": number, "numPassedTestSuites": number, "numFailedTestSuites": number, "numRuntimeErrorTestSuites": number, "numTotalTests": number, "numPassedTests": number, "numFailedTests": number, "numPendingTests": number, "numTodoTests": number, "openHandles": Array<Error>, "testResults": [{ "numFailingTests": number, "numPassingTests": number, "numPendingTests": number, "testResults": [{ "title": string (message in it block), "status": "failed" | "pending" | "passed", "ancestorTitles": [string (message in describe blocks)], "failureMessages": [string], "numPassingAsserts": number, "location": { "column": number, "line": number }, "duration": number | null }, ... ], "perfStats": { "start": epoch, "end": epoch }, "testFilePath": absolute path to test file, "coverage": {} }, "testExecError": (exists if there was a top-level failure) { "message": string, "stack": string } ... ] } ``` `testResultsProcessor` and `reporters` are very similar to each other. One difference is that a test result processor only gets called after all tests have finished, whereas a reporter has the ability to receive test results after individual tests and/or test suites are finished. ### `testRunner` [string] Default: `jest-circus/runner` This option allows the use of a custom test runner. The default is `jest-circus`. A custom test runner can be provided by specifying a path to a test runner implementation.
The test runner module must export a function with the following signature: ``` function testRunner( globalConfig: GlobalConfig, config: ProjectConfig, environment: Environment, runtime: Runtime, testPath: string, ): Promise<TestResult>; ``` An example of such a function can be found in our [jasmine2 test runner package](https://github.com/facebook/jest/blob/main/packages/jest-jasmine2/src/index.ts). ### `testSequencer` [string] Default: `@jest/test-sequencer` This option allows you to use a custom sequencer instead of Jest's default. tip Both `sort` and `shard` may optionally return a `Promise`. For example, you may sort test paths alphabetically: ``` const Sequencer = require('@jest/test-sequencer').default; class CustomSequencer extends Sequencer { /** * Select tests for shard requested via --shard=shardIndex/shardCount * Sharding is applied before sorting */ shard(tests, {shardIndex, shardCount}) { const shardSize = Math.ceil(tests.length / shardCount); const shardStart = shardSize * (shardIndex - 1); const shardEnd = shardSize * shardIndex; return [...tests] .sort((a, b) => (a.path > b.path ? 1 : -1)) .slice(shardStart, shardEnd); } /** * Sort tests to determine order of execution * Sorting is applied after sharding */ sort(tests) { // Test structure information // https://github.com/facebook/jest/blob/6b8b1404a1d9254e7d5d90a8934087a9c9899dab/packages/jest-runner/src/types.ts#L17-L21 const copyTests = Array.from(tests); return copyTests.sort((testA, testB) => (testA.path > testB.path ?
1 : -1)); } } module.exports = CustomSequencer; ``` custom-sequencer.js Add `custom-sequencer` to your Jest configuration: * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { testSequencer: 'path/to/custom-sequencer.js', }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { testSequencer: 'path/to/custom-sequencer.js', }; export default config; ``` ### `testTimeout` [number] Default: `5000` Default timeout of a test in milliseconds. ### `transform` [object<string, pathToTransformer | [pathToTransformer, object]>] Default: `{"\\.[jt]sx?$": "babel-jest"}` A map from regular expressions to paths to transformers. Optionally, a tuple with configuration options can be passed as second argument: `{filePattern: ['path-to-transformer', {options}]}`. For example, here is how you can configure `babel-jest` for non-default behavior: `{'\\.js$': ['babel-jest', {rootMode: 'upward'}]}`. Jest runs the code of your project as JavaScript, hence a transformer is needed if you use some syntax not supported by Node out of the box (such as JSX, TypeScript, Vue templates). By default, Jest will use [`babel-jest`](https://github.com/facebook/jest/tree/main/packages/babel-jest#setup) transformer, which will load your project's Babel configuration and transform any file matching the `/\.[jt]sx?$/` RegExp (in other words, any `.js`, `.jsx`, `.ts` or `.tsx` file). In addition, `babel-jest` will inject the Babel plugin necessary for mock hoisting talked about in [ES Module mocking](manual-mocks#using-with-es-module-imports). See the [Code Transformation](code-transformation) section for more details and instructions on building your own transformer. tip Keep in mind that a transformer only runs once per file unless the file has changed. 
Remember to include the default `babel-jest` transformer explicitly if you wish to use it alongside additional code preprocessors: * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { transform: { '\\.[jt]sx?$': 'babel-jest', '\\.css$': 'some-css-transformer', }, }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { transform: { '\\.[jt]sx?$': 'babel-jest', '\\.css$': 'some-css-transformer', }, }; export default config; ``` ### `transformIgnorePatterns` [array<string>] Default: `["/node_modules/", "\\.pnp\\.[^\\\/]+$"]` An array of regexp pattern strings that are matched against all source file paths before transformation. If the file path matches **any** of the patterns, it will not be transformed. Providing regexp patterns that overlap with each other may result in files you expected to be transformed not being transformed. For example: * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { transformIgnorePatterns: ['/node_modules/(?!(foo|bar)/)', '/bar/'], }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { transformIgnorePatterns: ['/node_modules/(?!(foo|bar)/)', '/bar/'], }; export default config; ``` The first pattern will match (and therefore not transform) files inside `/node_modules` except for those in `/node_modules/foo/` and `/node_modules/bar/`. The second pattern will match (and therefore not transform) files inside any path with `/bar/` in it. With the two together, files in `/node_modules/bar/` will not be transformed because they match the second pattern, even though they were excluded by the first. Sometimes it happens (especially in React Native or TypeScript projects) that 3rd party modules are published as untranspiled code. Since files inside `node_modules` are not transformed by default, Jest will not understand the code in these modules, resulting in syntax errors.
To overcome this, you may use `transformIgnorePatterns` to allow transpiling such modules. You'll find a good example of this use case in the [React Native Guide](tutorial-react-native#transformignorepatterns-customization). These pattern strings match against the full path. Use the `<rootDir>` string token to include the path to your project's root directory to prevent it from accidentally ignoring all of your files in different environments that may have different root directories. * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { transformIgnorePatterns: [ '<rootDir>/bower_components/', '<rootDir>/node_modules/', ], }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { transformIgnorePatterns: [ '<rootDir>/bower_components/', '<rootDir>/node_modules/', ], }; export default config; ``` tip If you use `pnpm` and need to transform some packages under `node_modules`, be aware that the packages in this folder (e.g. `node_modules/package-a/`) are symlinked to paths under `.pnpm` (e.g. `node_modules/.pnpm/package-a@x.x.x/node_modules/package-a/`), so using `<rootDir>/node_modules/(?!(package-a|package-b)/)` directly will not match them; instead, use: * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { transformIgnorePatterns: [ '<rootDir>/node_modules/.pnpm/(?!(package-a|package-b)@)', ], }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { transformIgnorePatterns: [ '<rootDir>/node_modules/.pnpm/(?!(package-a|package-b)@)', ], }; export default config; ``` Note that the folder names pnpm creates under `.pnpm` are the package name plus `@` and the version number, so a pattern ending in `/` will not match; use `@` instead.
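The overlapping-pattern behavior described in this section can be sanity-checked with plain `RegExp` objects (a sketch of the matching logic, not Jest's actual implementation):

```javascript
// A file is skipped for transformation if it matches ANY ignore pattern.
const patterns = ['/node_modules/(?!(foo|bar)/)', '/bar/'];
const isIgnored = filePath => patterns.some(p => new RegExp(p).test(filePath));

console.log(isIgnored('/node_modules/foo/index.js')); // false — transformed
console.log(isIgnored('/node_modules/bar/index.js')); // true — skipped via the second pattern
```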
### `unmockedModulePathPatterns` [array<string>] Default: `[]` An array of regexp pattern strings that are matched against all modules before the module loader will automatically return a mock for them. If a module's path matches any of the patterns in this list, it will not be automatically mocked by the module loader. This is useful for some commonly used 'utility' modules that are almost always used as implementation details (like underscore, lodash, etc.). It's generally a best practice to keep this list as small as possible and always use explicit `jest.mock()`/`jest.unmock()` calls in individual tests. Explicit per-test setup is far easier for other readers of the test to reason about the environment the test will run in. It is possible to override this setting in individual tests by explicitly calling `jest.mock()` at the top of the test file. ### `verbose` [boolean] Default: `false` Indicates whether each individual test should be reported during the run. All errors will also still be shown on the bottom after execution. Note that if there is only one test file being run it will default to `true`. ### `watchPathIgnorePatterns` [array<string>] Default: `[]` An array of RegExp patterns that are matched against all source file paths before re-running tests in watch mode. If the file path matches any of the patterns, updating it will not trigger a re-run of tests. These patterns match against the full path. Use the `<rootDir>` string token to include the path to your project's root directory to prevent it from accidentally ignoring all of your files in different environments that may have different root directories. Example: `["<rootDir>/node_modules/"]`. Even if nothing is specified here, the watcher will ignore changes to the version control folders (.git, .hg). Other hidden files and directories, i.e. those that begin with a dot (`.`), are watched by default. 
Remember to escape the dot when you add them to `watchPathIgnorePatterns` as it is a special RegExp character. * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { watchPathIgnorePatterns: ['<rootDir>/\\.tmp/', '<rootDir>/bar/'], }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { watchPathIgnorePatterns: ['<rootDir>/\\.tmp/', '<rootDir>/bar/'], }; export default config; ``` ### `watchPlugins` [array<string | [string, Object]>] Default: `[]` This option allows you to use custom watch plugins. Read more about watch plugins [here](watch-plugins). Examples of watch plugins include: * [`jest-watch-master`](https://github.com/rickhanlonii/jest-watch-master) * [`jest-watch-select-projects`](https://github.com/rogeliog/jest-watch-select-projects) * [`jest-watch-suspend`](https://github.com/unional/jest-watch-suspend) * [`jest-watch-typeahead`](https://github.com/jest-community/jest-watch-typeahead) * [`jest-watch-yarn-workspaces`](https://github.com/cameronhunter/jest-watch-directories/tree/master/packages/jest-watch-yarn-workspaces) info The values in the `watchPlugins` property can omit the `jest-watch-` prefix of the package name. ### `watchman` [boolean] Default: `true` Whether to use [`watchman`](https://facebook.github.io/watchman/) for file crawling. ### `workerIdleMemoryLimit` [number|string] Default: `undefined` Specifies the memory limit for workers before they are recycled and is primarily a work-around for [this issue](https://github.com/facebook/jest/issues/11956). After the worker has executed a test, its memory usage is checked. If it exceeds the value specified, the worker is killed and restarted. The limit can be specified in a number of different ways, and `Math.floor` is applied to the result to turn it into an integer value: * `<= 1` - The value is assumed to be a percentage of system memory. 
So `0.5` sets the memory limit of the worker to half of the total system memory. * `> 1` - Assumed to be a fixed byte value. Because of the previous rule, if you wanted a value of 1 byte (I don't know why) you could use `1.1`. * With units + `50%` - As above, a percentage of total system memory + `100KB`, `65MB`, etc - With units to denote a fixed memory limit. - `K` / `KB` - Kilobytes (x1000) - `KiB` - Kibibytes (x1024) - `M` / `MB` - Megabytes - `MiB` - Mebibytes - `G` / `GB` - Gigabytes - `GiB` - Gibibytes **NOTE:** [% based memory does not work on Linux CircleCI workers](https://github.com/facebook/jest/issues/11956#issuecomment-1212925677) due to incorrect system memory being reported. * JavaScript * TypeScript ``` /** @type {import('jest').Config} */ const config = { workerIdleMemoryLimit: 0.2, }; module.exports = config; ``` ``` import type {Config} from 'jest'; const config: Config = { workerIdleMemoryLimit: 0.2, }; export default config; ``` ### `//` [string] This option allows comments in `package.json`. Include the comment text as the value of this key: ``` { "name": "my-project", "jest": { "//": "Comment goes here", "verbose": true } } ``` package.json
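As a closing note on `workerIdleMemoryLimit`: the unit rules listed above can be summarized in a small conversion sketch. This is illustrative only, not Jest's implementation; `TOTAL_MEMORY` and `limitToBytes` are hypothetical stand-ins for the system memory Jest would detect and the conversion it would perform.

```javascript
// Illustrative sketch of the documented workerIdleMemoryLimit unit rules
// (not Jest's actual implementation). TOTAL_MEMORY is an assumed 8 GiB.
const TOTAL_MEMORY = 8 * 1024 ** 3;

function limitToBytes(limit) {
  if (typeof limit === 'number') {
    // <= 1 is a fraction of system memory; > 1 is a fixed byte value.
    return Math.floor(limit <= 1 ? limit * TOTAL_MEMORY : limit);
  }
  const match = /^(\d+(?:\.\d+)?)\s*(%|[KMG]i?B?)?$/i.exec(limit.trim());
  if (!match) throw new Error(`Unrecognized limit: ${limit}`);
  const value = Number(match[1]);
  const unit = (match[2] || '').toUpperCase();
  const multipliers = {
    '%': TOTAL_MEMORY / 100,
    K: 1e3, KB: 1e3, KIB: 1024,
    M: 1e6, MB: 1e6, MIB: 1024 ** 2,
    G: 1e9, GB: 1e9, GIB: 1024 ** 3,
  };
  return Math.floor(value * (multipliers[unit] ?? 1));
}

console.log(limitToBytes(0.5) === TOTAL_MEMORY / 2); // true: half of system memory
console.log(limitToBytes('100KB')); // 100000
console.log(limitToBytes('64MiB')); // 67108864
```

Note how `limitToBytes(1.1)` floors to 1 byte, matching the workaround described above for fixed byte values of 1.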
jest Expect Expect ====== When you're writing tests, you often need to check that values meet certain conditions. `expect` gives you access to a number of "matchers" that let you validate different things. For additional Jest matchers maintained by the Jest Community check out [`jest-extended`](https://github.com/jest-community/jest-extended). Methods ------- * [`expect(value)`](#expectvalue) * [`expect.extend(matchers)`](#expectextendmatchers) * [`expect.anything()`](#expectanything) * [`expect.any(constructor)`](#expectanyconstructor) * [`expect.arrayContaining(array)`](#expectarraycontainingarray) * [`expect.assertions(number)`](#expectassertionsnumber) * [`expect.closeTo(number, numDigits?)`](#expectclosetonumber-numdigits) * [`expect.hasAssertions()`](#expecthasassertions) * [`expect.not.arrayContaining(array)`](#expectnotarraycontainingarray) * [`expect.not.objectContaining(object)`](#expectnotobjectcontainingobject) * [`expect.not.stringContaining(string)`](#expectnotstringcontainingstring) * [`expect.not.stringMatching(string | regexp)`](#expectnotstringmatchingstring--regexp) * [`expect.objectContaining(object)`](#expectobjectcontainingobject) * [`expect.stringContaining(string)`](#expectstringcontainingstring) * [`expect.stringMatching(string | regexp)`](#expectstringmatchingstring--regexp) * [`expect.addSnapshotSerializer(serializer)`](#expectaddsnapshotserializerserializer) * [`.not`](#not) * [`.resolves`](#resolves) * [`.rejects`](#rejects) * [`.toBe(value)`](#tobevalue) * [`.toHaveBeenCalled()`](#tohavebeencalled) * [`.toHaveBeenCalledTimes(number)`](#tohavebeencalledtimesnumber) * [`.toHaveBeenCalledWith(arg1, arg2, ...)`](#tohavebeencalledwitharg1-arg2-) * [`.toHaveBeenLastCalledWith(arg1, arg2, ...)`](#tohavebeenlastcalledwitharg1-arg2-) * [`.toHaveBeenNthCalledWith(nthCall, arg1, arg2, ....)`](#tohavebeennthcalledwithnthcall-arg1-arg2-) * [`.toHaveReturned()`](#tohavereturned) * [`.toHaveReturnedTimes(number)`](#tohavereturnedtimesnumber) * 
[`.toHaveReturnedWith(value)`](#tohavereturnedwithvalue) * [`.toHaveLastReturnedWith(value)`](#tohavelastreturnedwithvalue) * [`.toHaveNthReturnedWith(nthCall, value)`](#tohaventhreturnedwithnthcall-value) * [`.toHaveLength(number)`](#tohavelengthnumber) * [`.toHaveProperty(keyPath, value?)`](#tohavepropertykeypath-value) * [`.toBeCloseTo(number, numDigits?)`](#tobeclosetonumber-numdigits) * [`.toBeDefined()`](#tobedefined) * [`.toBeFalsy()`](#tobefalsy) * [`.toBeGreaterThan(number | bigint)`](#tobegreaterthannumber--bigint) * [`.toBeGreaterThanOrEqual(number | bigint)`](#tobegreaterthanorequalnumber--bigint) * [`.toBeLessThan(number | bigint)`](#tobelessthannumber--bigint) * [`.toBeLessThanOrEqual(number | bigint)`](#tobelessthanorequalnumber--bigint) * [`.toBeInstanceOf(Class)`](#tobeinstanceofclass) * [`.toBeNull()`](#tobenull) * [`.toBeTruthy()`](#tobetruthy) * [`.toBeUndefined()`](#tobeundefined) * [`.toBeNaN()`](#tobenan) * [`.toContain(item)`](#tocontainitem) * [`.toContainEqual(item)`](#tocontainequalitem) * [`.toEqual(value)`](#toequalvalue) * [`.toMatch(regexp | string)`](#tomatchregexp--string) * [`.toMatchObject(object)`](#tomatchobjectobject) * [`.toMatchSnapshot(propertyMatchers?, hint?)`](#tomatchsnapshotpropertymatchers-hint) * [`.toMatchInlineSnapshot(propertyMatchers?, inlineSnapshot)`](#tomatchinlinesnapshotpropertymatchers-inlinesnapshot) * [`.toStrictEqual(value)`](#tostrictequalvalue) * [`.toThrow(error?)`](#tothrowerror) * [`.toThrowErrorMatchingSnapshot(hint?)`](#tothrowerrormatchingsnapshothint) * [`.toThrowErrorMatchingInlineSnapshot(inlineSnapshot)`](#tothrowerrormatchinginlinesnapshotinlinesnapshot) Reference --------- ### `expect(value)` The `expect` function is used every time you want to test a value. You will rarely call `expect` by itself. Instead, you will use `expect` along with a "matcher" function to assert something about a value. It's easier to understand this with an example. 
Let's say you have a method `bestLaCroixFlavor()` which is supposed to return the string `'grapefruit'`. Here's how you would test that: ``` test('the best flavor is grapefruit', () => { expect(bestLaCroixFlavor()).toBe('grapefruit'); }); ``` In this case, `toBe` is the matcher function. There are a lot of different matcher functions, documented below, to help you test different things. The argument to `expect` should be the value that your code produces, and any argument to the matcher should be the correct value. If you mix them up, your tests will still work, but the error messages on failing tests will look strange. ### `expect.extend(matchers)` You can use `expect.extend` to add your own matchers to Jest. For example, let's say that you're testing a number utility library and you're frequently asserting that numbers appear within particular ranges of other numbers. You could abstract that into a `toBeWithinRange` matcher: ``` expect.extend({ toBeWithinRange(received, floor, ceiling) { const pass = received >= floor && received <= ceiling; if (pass) { return { message: () => `expected ${received} not to be within range ${floor} - ${ceiling}`, pass: true, }; } else { return { message: () => `expected ${received} to be within range ${floor} - ${ceiling}`, pass: false, }; } }, }); test('numeric ranges', () => { expect(100).toBeWithinRange(90, 110); expect(101).not.toBeWithinRange(0, 100); expect({apples: 6, bananas: 3}).toEqual({ apples: expect.toBeWithinRange(1, 10), bananas: expect.not.toBeWithinRange(11, 20), }); }); ``` note In TypeScript, when using `@types/jest` for example, you can declare the new `toBeWithinRange` matcher in the imported module like this: ``` interface CustomMatchers<R = unknown> { toBeWithinRange(floor: number, ceiling: number): R; } declare global { namespace jest { interface Expect extends CustomMatchers {} interface Matchers<R> extends CustomMatchers<R> {} interface InverseAsymmetricMatchers extends CustomMatchers {} } } ``` #### Async 
Matchers `expect.extend` also supports async matchers. Async matchers return a Promise so you will need to await the returned value. Let's use an example matcher to illustrate the usage of them. We are going to implement a matcher called `toBeDivisibleByExternalValue`, where the divisible number is going to be pulled from an external source. ``` expect.extend({ async toBeDivisibleByExternalValue(received) { const externalValue = await getExternalValueFromRemoteSource(); const pass = received % externalValue == 0; if (pass) { return { message: () => `expected ${received} not to be divisible by ${externalValue}`, pass: true, }; } else { return { message: () => `expected ${received} to be divisible by ${externalValue}`, pass: false, }; } }, }); test('is divisible by external value', async () => { await expect(100).toBeDivisibleByExternalValue(); await expect(101).not.toBeDivisibleByExternalValue(); }); ``` #### Custom Matchers API Matchers should return an object (or a Promise of an object) with two keys. `pass` indicates whether there was a match or not, and `message` provides a function with no arguments that returns an error message in case of failure. Thus, when `pass` is false, `message` should return the error message for when `expect(x).yourMatcher()` fails. And when `pass` is true, `message` should return the error message for when `expect(x).not.yourMatcher()` fails. Matchers are called with the argument passed to `expect(x)` followed by the arguments passed to `.yourMatcher(y, z)`: ``` expect.extend({ yourMatcher(x, y, z) { return { pass: true, message: () => '', }; }, }); ``` These helper functions and properties can be found on `this` inside a custom matcher: #### `this.isNot` A boolean to let you know this matcher was called with the negated `.not` modifier allowing you to display a clear and correct matcher hint (see example code). 
#### `this.promise` A string allowing you to display a clear and correct matcher hint: * `'rejects'` if matcher was called with the promise `.rejects` modifier * `'resolves'` if matcher was called with the promise `.resolves` modifier * `''` if matcher was not called with a promise modifier #### `this.equals(a, b)` This is a deep-equality function that will return `true` if two objects have the same values (recursively). #### `this.expand` A boolean to let you know this matcher was called with an `expand` option. When Jest is called with the `--expand` flag, `this.expand` can be used to determine if Jest is expected to show full diffs and errors. #### `this.utils` There are a number of helpful tools exposed on `this.utils` primarily consisting of the exports from [`jest-matcher-utils`](https://github.com/facebook/jest/tree/main/packages/jest-matcher-utils). The most useful ones are `matcherHint`, `printExpected` and `printReceived` to format the error messages nicely. For example, take a look at the implementation for the `toBe` matcher: ``` const {diff} = require('jest-diff'); expect.extend({ toBe(received, expected) { const options = { comment: 'Object.is equality', isNot: this.isNot, promise: this.promise, }; const pass = Object.is(received, expected); const message = pass ? () => // eslint-disable-next-line prefer-template this.utils.matcherHint('toBe', undefined, undefined, options) + '\n\n' + `Expected: not ${this.utils.printExpected(expected)}\n` + `Received: ${this.utils.printReceived(received)}` : () => { const diffString = diff(expected, received, { expand: this.expand, }); return ( // eslint-disable-next-line prefer-template this.utils.matcherHint('toBe', undefined, undefined, options) + '\n\n' + (diffString && diffString.includes('- Expect') ? 
`Difference:\n\n${diffString}` : `Expected: ${this.utils.printExpected(expected)}\n` + `Received: ${this.utils.printReceived(received)}`) ); }; return {actual: received, message, pass}; }, }); ``` This will print something like this: ``` expect(received).toBe(expected) Expected value to be (using Object.is): "banana" Received: "apple" ``` When an assertion fails, the error message should give as much signal as necessary to the user so they can resolve their issue quickly. You should craft a precise failure message to make sure users of your custom assertions have a good developer experience. #### Custom snapshot matchers To use snapshot testing inside of your custom matcher you can import `jest-snapshot` and use it from within your matcher. Here's a snapshot matcher that trims a string to store for a given length, `.toMatchTrimmedSnapshot(length)`: ``` const {toMatchSnapshot} = require('jest-snapshot'); expect.extend({ toMatchTrimmedSnapshot(received, length) { return toMatchSnapshot.call( this, received.substring(0, length), 'toMatchTrimmedSnapshot', ); }, }); it('stores only 10 characters', () => { expect('extra long string oh my gerd').toMatchTrimmedSnapshot(10); }); /* Stored snapshot will look like: exports[`stores only 10 characters: toMatchTrimmedSnapshot 1`] = `"extra long"`; */ ``` It's also possible to create custom matchers for inline snapshots, the snapshots will be correctly added to the custom matchers. However, inline snapshot will always try to append to the first argument or the second when the first argument is the property matcher, so it's not possible to accept custom arguments in the custom matchers. 
``` const {toMatchInlineSnapshot} = require('jest-snapshot'); expect.extend({ toMatchTrimmedInlineSnapshot(received, ...rest) { return toMatchInlineSnapshot.call(this, received.substring(0, 10), ...rest); }, }); it('stores only 10 characters', () => { expect('extra long string oh my gerd').toMatchTrimmedInlineSnapshot(); /* The snapshot will be added inline like expect('extra long string oh my gerd').toMatchTrimmedInlineSnapshot( `"extra long"` ); */ }); ``` #### Async If your custom inline snapshot matcher is async i.e. uses `async`-`await` you might encounter an error like "Multiple inline snapshots for the same call are not supported". Jest needs additional context information to find where the custom inline snapshot matcher was used to update the snapshots properly. ``` const {toMatchInlineSnapshot} = require('jest-snapshot'); expect.extend({ async toMatchObservationInlineSnapshot(fn, ...rest) { // The error (and its stacktrace) must be created before any `await` this.error = new Error(); // The implementation of `observe` doesn't matter. // It only matters that the custom snapshot matcher is async. const observation = await observe(async () => { await fn(); }); return toMatchInlineSnapshot.call(this, observation, ...rest); }, }); it('observes something', async () => { await expect(async () => { return 'async action'; }).toMatchObservationInlineSnapshot(); /* The snapshot will be added inline like await expect(async () => { return 'async action'; }).toMatchObservationInlineSnapshot(`"async action"`); */ }); ``` #### Bail out Usually `jest` tries to match every snapshot that is expected in a test. Sometimes it might not make sense to continue the test if a prior snapshot failed. For example, when you make snapshots of a state-machine after various transitions you can abort the test once one transition produced the wrong state. In that case you can implement a custom snapshot matcher that throws on the first mismatch instead of collecting every mismatch. 
``` const {toMatchInlineSnapshot} = require('jest-snapshot'); expect.extend({ toMatchStateInlineSnapshot(...args) { this.dontThrow = () => {}; return toMatchInlineSnapshot.call(this, ...args); }, }); let state = 'initial'; function transition() { // Typo in the implementation should cause the test to fail if (state === 'INITIAL') { state = 'pending'; } else if (state === 'pending') { state = 'done'; } } it('transitions as expected', () => { expect(state).toMatchStateInlineSnapshot(`"initial"`); transition(); // Already produces a mismatch. No point in continuing the test. expect(state).toMatchStateInlineSnapshot(`"loading"`); transition(); expect(state).toMatchStateInlineSnapshot(`"done"`); }); ``` ### `expect.anything()` `expect.anything()` matches anything but `null` or `undefined`. You can use it inside `toEqual` or `toBeCalledWith` instead of a literal value. For example, if you want to check that a mock function is called with a non-null argument: ``` test('map calls its argument with a non-null argument', () => { const mock = jest.fn(); [1].map(x => mock(x)); expect(mock).toBeCalledWith(expect.anything()); }); ``` ### `expect.any(constructor)` `expect.any(constructor)` matches anything that was created with the given constructor or if it's a primitive that is of the passed type. You can use it inside `toEqual` or `toBeCalledWith` instead of a literal value. 
For example, if you want to check that a mock function is called with a number: ``` class Cat {} function getCat(fn) { return fn(new Cat()); } test('randocall calls its callback with a class instance', () => { const mock = jest.fn(); getCat(mock); expect(mock).toBeCalledWith(expect.any(Cat)); }); function randocall(fn) { return fn(Math.floor(Math.random() * 6 + 1)); } test('randocall calls its callback with a number', () => { const mock = jest.fn(); randocall(mock); expect(mock).toBeCalledWith(expect.any(Number)); }); ``` ### `expect.arrayContaining(array)` `expect.arrayContaining(array)` matches a received array which contains all of the elements in the expected array. That is, the expected array is a **subset** of the received array. Therefore, it matches a received array which contains elements that are **not** in the expected array. You can use it instead of a literal value: * in `toEqual` or `toBeCalledWith` * to match a property in `objectContaining` or `toMatchObject` ``` describe('arrayContaining', () => { const expected = ['Alice', 'Bob']; it('matches even if received contains additional elements', () => { expect(['Alice', 'Bob', 'Eve']).toEqual(expect.arrayContaining(expected)); }); it('does not match if received does not contain expected elements', () => { expect(['Bob', 'Eve']).not.toEqual(expect.arrayContaining(expected)); }); }); ``` ``` describe('Beware of a misunderstanding! A sequence of dice rolls', () => { const expected = [1, 2, 3, 4, 5, 6]; it('matches even with an unexpected number 7', () => { expect([4, 1, 6, 7, 3, 5, 2, 5, 4, 6]).toEqual( expect.arrayContaining(expected), ); }); it('does not match without an expected number 2', () => { expect([4, 1, 6, 7, 3, 5, 7, 5, 4, 6]).not.toEqual( expect.arrayContaining(expected), ); }); }); ``` ### `expect.assertions(number)` `expect.assertions(number)` verifies that a certain number of assertions are called during a test. 
This is often useful when testing asynchronous code, in order to make sure that assertions in a callback actually got called. For example, let's say that we have a function `doAsync` that receives two callbacks `callback1` and `callback2`, it will asynchronously call both of them in an unknown order. We can test this with: ``` test('doAsync calls both callbacks', () => { expect.assertions(2); function callback1(data) { expect(data).toBeTruthy(); } function callback2(data) { expect(data).toBeTruthy(); } doAsync(callback1, callback2); }); ``` The `expect.assertions(2)` call ensures that both callbacks actually get called. ### `expect.closeTo(number, numDigits?)` `expect.closeTo(number, numDigits?)` is useful when comparing floating point numbers in object properties or array items. If you need to compare a number, please use `.toBeCloseTo` instead. The optional `numDigits` argument limits the number of digits to check **after** the decimal point. For the default value `2`, the test criterion is `Math.abs(expected - received) < 0.005` (that is, `10 ** -2 / 2`). For example, this test passes with a precision of 5 digits: ``` test('compare float in object properties', () => { expect({ title: '0.1 + 0.2', sum: 0.1 + 0.2, }).toEqual({ title: '0.1 + 0.2', sum: expect.closeTo(0.3, 5), }); }); ``` ### `expect.hasAssertions()` `expect.hasAssertions()` verifies that at least one assertion is called during a test. This is often useful when testing asynchronous code, in order to make sure that assertions in a callback actually got called. For example, let's say that we have a few functions that all deal with state. `prepareState` calls a callback with a state object, `validateState` runs on that state object, and `waitOnState` returns a promise that waits until all `prepareState` callbacks complete. 
We can test this with: ``` test('prepareState prepares a valid state', () => { expect.hasAssertions(); prepareState(state => { expect(validateState(state)).toBeTruthy(); }); return waitOnState(); }); ``` The `expect.hasAssertions()` call ensures that the `prepareState` callback actually gets called. ### `expect.not.arrayContaining(array)` `expect.not.arrayContaining(array)` matches a received array which does not contain all of the elements in the expected array. That is, the expected array **is not a subset** of the received array. It is the inverse of `expect.arrayContaining`. ``` describe('not.arrayContaining', () => { const expected = ['Samantha']; it('matches if the actual array does not contain the expected elements', () => { expect(['Alice', 'Bob', 'Eve']).toEqual( expect.not.arrayContaining(expected), ); }); }); ``` ### `expect.not.objectContaining(object)` `expect.not.objectContaining(object)` matches any received object that does not recursively match the expected properties. That is, the expected object **is not a subset** of the received object. Therefore, it matches a received object which contains properties that are **not** in the expected object. It is the inverse of `expect.objectContaining`. ``` describe('not.objectContaining', () => { const expected = {foo: 'bar'}; it('matches if the actual object does not contain expected key: value pairs', () => { expect({bar: 'baz'}).toEqual(expect.not.objectContaining(expected)); }); }); ``` ### `expect.not.stringContaining(string)` `expect.not.stringContaining(string)` matches the received value if it is not a string or if it is a string that does not contain the exact expected string. It is the inverse of `expect.stringContaining`. 
``` describe('not.stringContaining', () => { const expected = 'Hello world!'; it('matches if the received value does not contain the expected substring', () => { expect('How are you?').toEqual(expect.not.stringContaining(expected)); }); }); ``` ### `expect.not.stringMatching(string | regexp)` `expect.not.stringMatching(string | regexp)` matches the received value if it is not a string or if it is a string that does not match the expected string or regular expression. It is the inverse of `expect.stringMatching`. ``` describe('not.stringMatching', () => { const expected = /Hello world!/; it('matches if the received value does not match the expected regex', () => { expect('How are you?').toEqual(expect.not.stringMatching(expected)); }); }); ``` ### `expect.objectContaining(object)` `expect.objectContaining(object)` matches any received object that recursively matches the expected properties. That is, the expected object is a **subset** of the received object. Therefore, it matches a received object which contains properties that **are present** in the expected object. Instead of literal property values in the expected object, you can use matchers, `expect.anything()`, and so on. For example, let's say that we expect an `onPress` function to be called with an `Event` object, and all we need to verify is that the event has `event.x` and `event.y` properties. We can do that with: ``` test('onPress gets called with the right thing', () => { const onPress = jest.fn(); simulatePresses(onPress); expect(onPress).toBeCalledWith( expect.objectContaining({ x: expect.any(Number), y: expect.any(Number), }), ); }); ``` ### `expect.stringContaining(string)` `expect.stringContaining(string)` matches the received value if it is a string that contains the exact expected string. ### `expect.stringMatching(string | regexp)` `expect.stringMatching(string | regexp)` matches the received value if it is a string that matches the expected string or regular expression. 
You can use it instead of a literal value: * in `toEqual` or `toBeCalledWith` * to match an element in `arrayContaining` * to match a property in `objectContaining` or `toMatchObject` This example also shows how you can nest multiple asymmetric matchers, with `expect.stringMatching` inside the `expect.arrayContaining`. ``` describe('stringMatching in arrayContaining', () => { const expected = [ expect.stringMatching(/^Alic/), expect.stringMatching(/^[BR]ob/), ]; it('matches even if received contains additional elements', () => { expect(['Alicia', 'Roberto', 'Evelina']).toEqual( expect.arrayContaining(expected), ); }); it('does not match if received does not contain expected elements', () => { expect(['Roberto', 'Evelina']).not.toEqual( expect.arrayContaining(expected), ); }); }); ``` ### `expect.addSnapshotSerializer(serializer)` You can call `expect.addSnapshotSerializer` to add a module that formats application-specific data structures. For an individual test file, an added module precedes any modules from `snapshotSerializers` configuration, which precede the default snapshot serializers for built-in JavaScript types and for React elements. The last module added is the first module tested. ``` import serializer from 'my-serializer-module'; expect.addSnapshotSerializer(serializer); // affects expect(value).toMatchSnapshot() assertions in the test file ``` If you add a snapshot serializer in individual test files instead of adding it to `snapshotSerializers` configuration: * You make the dependency explicit instead of implicit. * You avoid limits to configuration that might cause you to eject from [create-react-app](https://github.com/facebookincubator/create-react-app). See [configuring Jest](configuration#snapshotserializers-arraystring) for more information. ### `.not` If you know how to test something, `.not` lets you test its opposite. 
For example, this code tests that the best La Croix flavor is not coconut: ``` test('the best flavor is not coconut', () => { expect(bestLaCroixFlavor()).not.toBe('coconut'); }); ``` ### `.resolves` Use `resolves` to unwrap the value of a fulfilled promise so any other matcher can be chained. If the promise is rejected the assertion fails. For example, this code tests that the promise resolves and that the resulting value is `'lemon'`: ``` test('resolves to lemon', () => { // make sure to add a return statement return expect(Promise.resolve('lemon')).resolves.toBe('lemon'); }); ``` Note that, since you are still testing promises, the test is still asynchronous. Hence, you will need to [tell Jest to wait](asynchronous#promises) by returning the unwrapped assertion. Alternatively, you can use `async/await` in combination with `.resolves`: ``` test('resolves to lemon', async () => { await expect(Promise.resolve('lemon')).resolves.toBe('lemon'); await expect(Promise.resolve('lemon')).resolves.not.toBe('octopus'); }); ``` ### `.rejects` Use `.rejects` to unwrap the reason of a rejected promise so any other matcher can be chained. If the promise is fulfilled the assertion fails. For example, this code tests that the promise rejects with reason `'octopus'`: ``` test('rejects to octopus', () => { // make sure to add a return statement return expect(Promise.reject(new Error('octopus'))).rejects.toThrow( 'octopus', ); }); ``` Note that, since you are still testing promises, the test is still asynchronous. Hence, you will need to [tell Jest to wait](asynchronous#promises) by returning the unwrapped assertion. Alternatively, you can use `async/await` in combination with `.rejects`. ``` test('rejects to octopus', async () => { await expect(Promise.reject(new Error('octopus'))).rejects.toThrow('octopus'); }); ``` ### `.toBe(value)` Use `.toBe` to compare primitive values or to check referential identity of object instances. 
It calls `Object.is` to compare values, which is even better for testing than the `===` strict equality operator. For example, this code will validate some properties of the `can` object: ``` const can = { name: 'pamplemousse', ounces: 12, }; describe('the can', () => { test('has 12 ounces', () => { expect(can.ounces).toBe(12); }); test('has a sophisticated name', () => { expect(can.name).toBe('pamplemousse'); }); }); ``` Don't use `.toBe` with floating-point numbers. For example, due to rounding, in JavaScript `0.2 + 0.1` is not strictly equal to `0.3`. If you have floating point numbers, try `.toBeCloseTo` instead. Although the `.toBe` matcher **checks** referential identity, it **reports** a deep comparison of values if the assertion fails. If differences between properties do not help you to understand why a test fails, especially if the report is large, then you might move the comparison into the `expect` function. For example, to assert whether or not elements are the same instance: * rewrite `expect(received).toBe(expected)` as `expect(Object.is(received, expected)).toBe(true)` * rewrite `expect(received).not.toBe(expected)` as `expect(Object.is(received, expected)).toBe(false)` ### `.toHaveBeenCalled()` Also under the alias: `.toBeCalled()` Use `.toHaveBeenCalled` to ensure that a mock function got called. For example, let's say you have a `drinkAll(drink, flavour)` function that takes a `drink` function and applies it to all available beverages. You might want to check that `drink` gets called for `'lemon'`, but not for `'octopus'`, because `'octopus'` flavour is really weird and why would anything be octopus-flavoured? 
You can do that with this test suite: ``` function drinkAll(callback, flavour) { if (flavour !== 'octopus') { callback(flavour); } } describe('drinkAll', () => { test('drinks something lemon-flavoured', () => { const drink = jest.fn(); drinkAll(drink, 'lemon'); expect(drink).toHaveBeenCalled(); }); test('does not drink something octopus-flavoured', () => { const drink = jest.fn(); drinkAll(drink, 'octopus'); expect(drink).not.toHaveBeenCalled(); }); }); ``` ### `.toHaveBeenCalledTimes(number)` Also under the alias: `.toBeCalledTimes(number)` Use `.toHaveBeenCalledTimes` to ensure that a mock function got called an exact number of times. For example, let's say you have a `drinkEach(drink, Array<flavor>)` function that takes a `drink` function and applies it to an array of passed beverages. You might want to check that the drink function was called an exact number of times. You can do that with this test suite: ``` test('drinkEach drinks each drink', () => { const drink = jest.fn(); drinkEach(drink, ['lemon', 'octopus']); expect(drink).toHaveBeenCalledTimes(2); }); ``` ### `.toHaveBeenCalledWith(arg1, arg2, ...)` Also under the alias: `.toBeCalledWith()` Use `.toHaveBeenCalledWith` to ensure that a mock function was called with specific arguments. The arguments are checked with the same algorithm that `.toEqual` uses. For example, let's say that you can register a beverage with a `register` function, and `applyToAll(f)` should apply the function `f` to all registered beverages. To make sure this works, you could write: ``` test('registration applies correctly to orange La Croix', () => { const beverage = new LaCroix('orange'); register(beverage); const f = jest.fn(); applyToAll(f); expect(f).toHaveBeenCalledWith(beverage); }); ``` ### `.toHaveBeenLastCalledWith(arg1, arg2, ...)` Also under the alias: `.lastCalledWith(arg1, arg2, ...)` If you have a mock function, you can use `.toHaveBeenLastCalledWith` to test what arguments it was last called with. 
For example, let's say you have an `applyToAllFlavors(f)` function that applies `f` to a bunch of flavors, and you want to ensure that when you call it, the last flavor it operates on is `'mango'`. You can write: ``` test('applying to all flavors does mango last', () => { const drink = jest.fn(); applyToAllFlavors(drink); expect(drink).toHaveBeenLastCalledWith('mango'); }); ``` ### `.toHaveBeenNthCalledWith(nthCall, arg1, arg2, ...)` Also under the alias: `.nthCalledWith(nthCall, arg1, arg2, ...)` If you have a mock function, you can use `.toHaveBeenNthCalledWith` to test what arguments it was nth called with. For example, let's say you have a `drinkEach(drink, Array<flavor>)` function that applies `f` to a bunch of flavors, and you want to ensure that when you call it, the first flavor it operates on is `'lemon'` and the second one is `'octopus'`. You can write: ``` test('drinkEach drinks each drink', () => { const drink = jest.fn(); drinkEach(drink, ['lemon', 'octopus']); expect(drink).toHaveBeenNthCalledWith(1, 'lemon'); expect(drink).toHaveBeenNthCalledWith(2, 'octopus'); }); ``` note The nth argument must be a positive integer starting from 1. ### `.toHaveReturned()` Also under the alias: `.toReturn()` If you have a mock function, you can use `.toHaveReturned` to test that the mock function successfully returned (i.e., did not throw an error) at least one time. For example, let's say you have a mock `drink` that returns `true`. You can write: ``` test('drinks returns', () => { const drink = jest.fn(() => true); drink(); expect(drink).toHaveReturned(); }); ``` ### `.toHaveReturnedTimes(number)` Also under the alias: `.toReturnTimes(number)` Use `.toHaveReturnedTimes` to ensure that a mock function returned successfully (i.e., did not throw an error) an exact number of times. Any calls to the mock function that throw an error are not counted toward the number of times the function returned. For example, let's say you have a mock `drink` that returns `true`. 
You can write: ``` test('drink returns twice', () => { const drink = jest.fn(() => true); drink(); drink(); expect(drink).toHaveReturnedTimes(2); }); ``` ### `.toHaveReturnedWith(value)` Also under the alias: `.toReturnWith(value)` Use `.toHaveReturnedWith` to ensure that a mock function returned a specific value. For example, let's say you have a mock `drink` that returns the name of the beverage that was consumed. You can write: ``` test('drink returns La Croix', () => { const beverage = {name: 'La Croix'}; const drink = jest.fn(beverage => beverage.name); drink(beverage); expect(drink).toHaveReturnedWith('La Croix'); }); ``` ### `.toHaveLastReturnedWith(value)` Also under the alias: `.lastReturnedWith(value)` Use `.toHaveLastReturnedWith` to test the specific value that a mock function last returned. If the last call to the mock function threw an error, then this matcher will fail no matter what value you provided as the expected return value. For example, let's say you have a mock `drink` that returns the name of the beverage that was consumed. You can write: ``` test('drink returns La Croix (Orange) last', () => { const beverage1 = {name: 'La Croix (Lemon)'}; const beverage2 = {name: 'La Croix (Orange)'}; const drink = jest.fn(beverage => beverage.name); drink(beverage1); drink(beverage2); expect(drink).toHaveLastReturnedWith('La Croix (Orange)'); }); ``` ### `.toHaveNthReturnedWith(nthCall, value)` Also under the alias: `.nthReturnedWith(nthCall, value)` Use `.toHaveNthReturnedWith` to test the specific value that a mock function returned for the nth call. If the nth call to the mock function threw an error, then this matcher will fail no matter what value you provided as the expected return value. For example, let's say you have a mock `drink` that returns the name of the beverage that was consumed. 
You can write: ``` test('drink returns expected nth calls', () => { const beverage1 = {name: 'La Croix (Lemon)'}; const beverage2 = {name: 'La Croix (Orange)'}; const drink = jest.fn(beverage => beverage.name); drink(beverage1); drink(beverage2); expect(drink).toHaveNthReturnedWith(1, 'La Croix (Lemon)'); expect(drink).toHaveNthReturnedWith(2, 'La Croix (Orange)'); }); ``` note The nth argument must be a positive integer starting from 1. ### `.toHaveLength(number)` Use `.toHaveLength` to check that an object has a `.length` property and it is set to a certain numeric value. This is especially useful for checking the size of arrays or strings. ``` expect([1, 2, 3]).toHaveLength(3); expect('abc').toHaveLength(3); expect('').not.toHaveLength(5); ``` ### `.toHaveProperty(keyPath, value?)` Use `.toHaveProperty` to check if a property at the provided reference `keyPath` exists for an object. For checking deeply nested properties in an object you may use [dot notation](https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Operators/Property_accessors) or an array containing the keyPath for deep references. You can provide an optional `value` argument to compare the received property value (recursively for all properties of object instances, also known as deep equality, like the `toEqual` matcher). The following example contains a `houseForSale` object with nested properties. We are using `toHaveProperty` to check for the existence and values of various properties in the object. 
``` // Object containing house features to be tested const houseForSale = { bath: true, bedrooms: 4, kitchen: { amenities: ['oven', 'stove', 'washer'], area: 20, wallColor: 'white', 'nice.oven': true, }, livingroom: { amenities: [ { couch: [ ['large', {dimensions: [20, 20]}], ['small', {dimensions: [10, 10]}], ], }, ], }, 'ceiling.height': 2, }; test('this house has my desired features', () => { // Example Referencing expect(houseForSale).toHaveProperty('bath'); expect(houseForSale).toHaveProperty('bedrooms', 4); expect(houseForSale).not.toHaveProperty('pool'); // Deep referencing using dot notation expect(houseForSale).toHaveProperty('kitchen.area', 20); expect(houseForSale).toHaveProperty('kitchen.amenities', [ 'oven', 'stove', 'washer', ]); expect(houseForSale).not.toHaveProperty('kitchen.open'); // Deep referencing using an array containing the keyPath expect(houseForSale).toHaveProperty(['kitchen', 'area'], 20); expect(houseForSale).toHaveProperty( ['kitchen', 'amenities'], ['oven', 'stove', 'washer'], ); expect(houseForSale).toHaveProperty(['kitchen', 'amenities', 0], 'oven'); expect(houseForSale).toHaveProperty( 'livingroom.amenities[0].couch[0][1].dimensions[0]', 20, ); expect(houseForSale).toHaveProperty(['kitchen', 'nice.oven']); expect(houseForSale).not.toHaveProperty(['kitchen', 'open']); // Referencing keys with dot in the key itself expect(houseForSale).toHaveProperty(['ceiling.height'], 2); }); ``` ### `.toBeCloseTo(number, numDigits?)` Use `toBeCloseTo` to compare floating point numbers for approximate equality. The optional `numDigits` argument limits the number of digits to check **after** the decimal point. For the default value `2`, the test criterion is `Math.abs(expected - received) < 0.005` (that is, `10 ** -2 / 2`). Intuitive equality comparisons often fail, because arithmetic on decimal (base 10) values often has rounding errors in limited precision binary (base 2) representation. 
For example, this test fails: ``` test('adding works sanely with decimals', () => { expect(0.2 + 0.1).toBe(0.3); // Fails! }); ``` It fails because in JavaScript, `0.2 + 0.1` is actually `0.30000000000000004`. For example, this test passes with a precision of 5 digits: ``` test('adding works sanely with decimals', () => { expect(0.2 + 0.1).toBeCloseTo(0.3, 5); }); ``` Because floating point errors are the problem that `toBeCloseTo` solves, it does not support big integer values. ### `.toBeDefined()` Use `.toBeDefined` to check that a variable is not undefined. For example, if you want to check that a function `fetchNewFlavorIdea()` returns *something*, you can write: ``` test('there is a new flavor idea', () => { expect(fetchNewFlavorIdea()).toBeDefined(); }); ``` You could write `expect(fetchNewFlavorIdea()).not.toBe(undefined)`, but it's better practice to avoid referring to `undefined` directly in your code. ### `.toBeFalsy()` Use `.toBeFalsy` when you don't care what a value is and you want to ensure a value is false in a boolean context. For example, let's say you have some application code that looks like: ``` drinkSomeLaCroix(); if (!getErrors()) { drinkMoreLaCroix(); } ``` You may not care what `getErrors` returns, specifically - it might return `false`, `null`, or `0`, and your code would still work. So if you want to test there are no errors after drinking some La Croix, you could write: ``` test('drinking La Croix does not lead to errors', () => { drinkSomeLaCroix(); expect(getErrors()).toBeFalsy(); }); ``` In JavaScript, there are six falsy values: `false`, `0`, `''`, `null`, `undefined`, and `NaN`. Everything else is truthy. ### `.toBeGreaterThan(number | bigint)` Use `toBeGreaterThan` to compare `received > expected` for number or big integer values. 
For example, test that `ouncesPerCan()` returns a value of more than 10 ounces: ``` test('ounces per can is more than 10', () => { expect(ouncesPerCan()).toBeGreaterThan(10); }); ``` ### `.toBeGreaterThanOrEqual(number | bigint)` Use `toBeGreaterThanOrEqual` to compare `received >= expected` for number or big integer values. For example, test that `ouncesPerCan()` returns a value of at least 12 ounces: ``` test('ounces per can is at least 12', () => { expect(ouncesPerCan()).toBeGreaterThanOrEqual(12); }); ``` ### `.toBeLessThan(number | bigint)` Use `toBeLessThan` to compare `received < expected` for number or big integer values. For example, test that `ouncesPerCan()` returns a value of less than 20 ounces: ``` test('ounces per can is less than 20', () => { expect(ouncesPerCan()).toBeLessThan(20); }); ``` ### `.toBeLessThanOrEqual(number | bigint)` Use `toBeLessThanOrEqual` to compare `received <= expected` for number or big integer values. For example, test that `ouncesPerCan()` returns a value of at most 12 ounces: ``` test('ounces per can is at most 12', () => { expect(ouncesPerCan()).toBeLessThanOrEqual(12); }); ``` ### `.toBeInstanceOf(Class)` Use `.toBeInstanceOf(Class)` to check that an object is an instance of a class. This matcher uses `instanceof` underneath. ``` class A {} expect(new A()).toBeInstanceOf(A); expect(() => {}).toBeInstanceOf(Function); expect(new A()).toBeInstanceOf(Function); // throws ``` ### `.toBeNull()` `.toBeNull()` is the same as `.toBe(null)` but the error messages are a bit nicer. So use `.toBeNull()` when you want to check that something is null. ``` function bloop() { return null; } test('bloop returns null', () => { expect(bloop()).toBeNull(); }); ``` ### `.toBeTruthy()` Use `.toBeTruthy` when you don't care what a value is and you want to ensure a value is true in a boolean context. 
For example, let's say you have some application code that looks like: ``` drinkSomeLaCroix(); if (thirstInfo()) { drinkMoreLaCroix(); } ``` You may not care what `thirstInfo` returns, specifically - it might return `true` or a complex object, and your code would still work. So if you want to test that `thirstInfo` will be truthy after drinking some La Croix, you could write: ``` test('drinking La Croix leads to having thirst info', () => { drinkSomeLaCroix(); expect(thirstInfo()).toBeTruthy(); }); ``` In JavaScript, there are six falsy values: `false`, `0`, `''`, `null`, `undefined`, and `NaN`. Everything else is truthy. ### `.toBeUndefined()` Use `.toBeUndefined` to check that a variable is undefined. For example, if you want to check that a function `bestDrinkForFlavor(flavor)` returns `undefined` for the `'octopus'` flavor, because there is no good octopus-flavored drink: ``` test('the best drink for octopus flavor is undefined', () => { expect(bestDrinkForFlavor('octopus')).toBeUndefined(); }); ``` You could write `expect(bestDrinkForFlavor('octopus')).toBe(undefined)`, but it's better practice to avoid referring to `undefined` directly in your code. ### `.toBeNaN()` Use `.toBeNaN` when checking a value is `NaN`. ``` test('passes when value is NaN', () => { expect(NaN).toBeNaN(); expect(1).not.toBeNaN(); }); ``` ### `.toContain(item)` Use `.toContain` when you want to check that an item is in an array. For testing the items in the array, this uses `===`, a strict equality check. `.toContain` can also check whether a string is a substring of another string. For example, if `getAllFlavors()` returns an array of flavors and you want to be sure that `lime` is in there, you can write: ``` test('the flavor list contains lime', () => { expect(getAllFlavors()).toContain('lime'); }); ``` This matcher also accepts other iterables such as strings, sets, node lists and HTML collections. 
### `.toContainEqual(item)` Use `.toContainEqual` when you want to check that an item with a specific structure and values is contained in an array. For testing the items in the array, this matcher recursively checks the equality of all fields, rather than checking for object identity. ``` describe('my beverage', () => { test('is delicious and not sour', () => { const myBeverage = {delicious: true, sour: false}; expect(myBeverages()).toContainEqual(myBeverage); }); }); ``` ### `.toEqual(value)` Use `.toEqual` to compare recursively all properties of object instances (also known as "deep" equality). It calls `Object.is` to compare primitive values, which is even better for testing than the `===` strict equality operator. For example, `.toEqual` and `.toBe` behave differently in this test suite, so all the tests pass: ``` const can1 = { flavor: 'grapefruit', ounces: 12, }; const can2 = { flavor: 'grapefruit', ounces: 12, }; describe('the La Croix cans on my desk', () => { test('have all the same properties', () => { expect(can1).toEqual(can2); }); test('are not the exact same can', () => { expect(can1).not.toBe(can2); }); }); ``` tip `.toEqual` won't perform a *deep equality* check for two errors. Only the `message` property of an Error is considered for equality. It is recommended to use the `.toThrow` matcher for testing against errors. If differences between properties do not help you to understand why a test fails, especially if the report is large, then you might move the comparison into the `expect` function. For example, use `equals` method of `Buffer` class to assert whether or not buffers contain the same content: * rewrite `expect(received).toEqual(expected)` as `expect(received.equals(expected)).toBe(true)` * rewrite `expect(received).not.toEqual(expected)` as `expect(received.equals(expected)).toBe(false)` ### `.toMatch(regexp | string)` Use `.toMatch` to check that a string matches a regular expression. 
For example, you might not know what exactly `essayOnTheBestFlavor()` returns, but you know it's a really long string, and the substring `grapefruit` should be in there somewhere. You can test this with: ``` describe('an essay on the best flavor', () => { test('mentions grapefruit', () => { expect(essayOnTheBestFlavor()).toMatch(/grapefruit/); expect(essayOnTheBestFlavor()).toMatch(new RegExp('grapefruit')); }); }); ``` This matcher also accepts a string, which it will try to match: ``` describe('grapefruits are healthy', () => { test('grapefruits are a fruit', () => { expect('grapefruits').toMatch('fruit'); }); }); ``` ### `.toMatchObject(object)` Use `.toMatchObject` to check that a JavaScript object matches a subset of the properties of an object. It will match received objects with properties that are **not** in the expected object. You can also pass an array of objects, in which case the method will return true only if each object in the received array matches (in the `toMatchObject` sense described above) the corresponding object in the expected array. This is useful if you want to check that two arrays match in their number of elements, as opposed to `arrayContaining`, which allows for extra elements in the received array. You can match properties against values or against matchers. 
``` const houseForSale = { bath: true, bedrooms: 4, kitchen: { amenities: ['oven', 'stove', 'washer'], area: 20, wallColor: 'white', }, }; const desiredHouse = { bath: true, kitchen: { amenities: ['oven', 'stove', 'washer'], wallColor: expect.stringMatching(/white|yellow/), }, }; test('the house has my desired features', () => { expect(houseForSale).toMatchObject(desiredHouse); }); ``` ``` describe('toMatchObject applied to arrays', () => { test('the number of elements must match exactly', () => { expect([{foo: 'bar'}, {baz: 1}]).toMatchObject([{foo: 'bar'}, {baz: 1}]); }); test('.toMatchObject is called for each element, so extra object properties are okay', () => { expect([{foo: 'bar'}, {baz: 1, extra: 'quux'}]).toMatchObject([ {foo: 'bar'}, {baz: 1}, ]); }); }); ``` ### `.toMatchSnapshot(propertyMatchers?, hint?)` This ensures that a value matches the most recent snapshot. Check out [the Snapshot Testing guide](snapshot-testing) for more information. You can provide an optional `propertyMatchers` object argument, which has asymmetric matchers as values of a subset of expected properties, **if** the received value will be an **object** instance. It is like `toMatchObject` with flexible criteria for a subset of properties, followed by a snapshot test as exact criteria for the rest of the properties. You can provide an optional `hint` string argument that is appended to the test name. Although Jest always appends a number at the end of a snapshot name, short descriptive hints might be more useful than numbers to differentiate **multiple** snapshots in a **single** `it` or `test` block. Jest sorts snapshots by name in the corresponding `.snap` file. ### `.toMatchInlineSnapshot(propertyMatchers?, inlineSnapshot)` Ensures that a value matches the most recent snapshot. You can provide an optional `propertyMatchers` object argument, which has asymmetric matchers as values of a subset of expected properties, **if** the received value will be an **object** instance. 
It is like `toMatchObject` with flexible criteria for a subset of properties, followed by a snapshot test as exact criteria for the rest of the properties. Jest adds the `inlineSnapshot` string argument to the matcher in the test file (instead of an external `.snap` file) the first time that the test runs. Check out the section on [Inline Snapshots](snapshot-testing#inline-snapshots) for more info. ### `.toStrictEqual(value)` Use `.toStrictEqual` to test that objects have the same types as well as structure. Differences from `.toEqual`: * Keys with `undefined` properties are checked. e.g. `{a: undefined, b: 2}` does not match `{b: 2}` when using `.toStrictEqual`. * Array sparseness is checked. e.g. `[, 1]` does not match `[undefined, 1]` when using `.toStrictEqual`. * Object types are checked to be equal. e.g. A class instance with fields `a` and `b` will not equal a literal object with fields `a` and `b`. ``` class LaCroix { constructor(flavor) { this.flavor = flavor; } } describe('the La Croix cans on my desk', () => { test('are not semantically the same', () => { expect(new LaCroix('lemon')).toEqual({flavor: 'lemon'}); expect(new LaCroix('lemon')).not.toStrictEqual({flavor: 'lemon'}); }); }); ``` ### `.toThrow(error?)` Also under the alias: `.toThrowError(error?)` Use `.toThrow` to test that a function throws when it is called. For example, if we want to test that `drinkFlavor('octopus')` throws, because octopus flavor is too disgusting to drink, we could write: ``` test('throws on octopus', () => { expect(() => { drinkFlavor('octopus'); }).toThrow(); }); ``` tip You must wrap the code in a function, otherwise the error will not be caught and the assertion will fail. 
You can provide an optional argument to test that a specific error is thrown: * regular expression: error message **matches** the pattern * string: error message **includes** the substring * error object: error message is **equal to** the message property of the object * error class: error object is **instance of** class For example, let's say that `drinkFlavor` is coded like this: ``` function drinkFlavor(flavor) { if (flavor == 'octopus') { throw new DisgustingFlavorError('yuck, octopus flavor'); } // Do some other stuff } ``` We could test this error gets thrown in several ways: ``` test('throws on octopus', () => { function drinkOctopus() { drinkFlavor('octopus'); } // Test that the error message says "yuck" somewhere: these are equivalent expect(drinkOctopus).toThrowError(/yuck/); expect(drinkOctopus).toThrowError('yuck'); // Test the exact error message expect(drinkOctopus).toThrowError(/^yuck, octopus flavor$/); expect(drinkOctopus).toThrowError(new Error('yuck, octopus flavor')); // Test that we get a DisgustingFlavorError expect(drinkOctopus).toThrowError(DisgustingFlavorError); }); ``` ### `.toThrowErrorMatchingSnapshot(hint?)` Use `.toThrowErrorMatchingSnapshot` to test that a function throws an error matching the most recent snapshot when it is called. You can provide an optional `hint` string argument that is appended to the test name. Although Jest always appends a number at the end of a snapshot name, short descriptive hints might be more useful than numbers to differentiate **multiple** snapshots in a **single** `it` or `test` block. Jest sorts snapshots by name in the corresponding `.snap` file. 
For example, let's say you have a `drinkFlavor` function that throws whenever the flavor is `'octopus'`, and is coded like this: ``` function drinkFlavor(flavor) { if (flavor == 'octopus') { throw new DisgustingFlavorError('yuck, octopus flavor'); } // Do some other stuff } ``` The test for this function will look this way: ``` test('throws on octopus', () => { function drinkOctopus() { drinkFlavor('octopus'); } expect(drinkOctopus).toThrowErrorMatchingSnapshot(); }); ``` And it will generate the following snapshot: ``` exports[`drinking flavors throws on octopus 1`] = `"yuck, octopus flavor"`; ``` Check out [React Tree Snapshot Testing](https://jestjs.io/blog/2016/07/27/jest-14) for more information on snapshot testing. ### `.toThrowErrorMatchingInlineSnapshot(inlineSnapshot)` Use `.toThrowErrorMatchingInlineSnapshot` to test that a function throws an error matching the most recent snapshot when it is called. Jest adds the `inlineSnapshot` string argument to the matcher in the test file (instead of an external `.snap` file) the first time that the test runs. Check out the section on [Inline Snapshots](snapshot-testing#inline-snapshots) for more info.
jest The Jest Object The Jest Object =============== The `jest` object is automatically in scope within every test file. The methods in the `jest` object help create mocks and let you control Jest's overall behavior. It can also be imported explicitly via `import {jest} from '@jest/globals'`. Methods ------- * [Mock Modules](#mock-modules) + [`jest.disableAutomock()`](#jestdisableautomock) + [`jest.enableAutomock()`](#jestenableautomock) + [`jest.createMockFromModule(moduleName)`](#jestcreatemockfrommodulemodulename) + [`jest.mock(moduleName, factory, options)`](#jestmockmodulename-factory-options) + [`jest.Mocked<Source>`](#jestmockedsource) + [`jest.mocked(source, options?)`](#jestmockedsource-options) + [`jest.unmock(moduleName)`](#jestunmockmodulename) + [`jest.doMock(moduleName, factory, options)`](#jestdomockmodulename-factory-options) + [`jest.dontMock(moduleName)`](#jestdontmockmodulename) + [`jest.setMock(moduleName, moduleExports)`](#jestsetmockmodulename-moduleexports) + [`jest.requireActual(moduleName)`](#jestrequireactualmodulename) + [`jest.requireMock(moduleName)`](#jestrequiremockmodulename) + [`jest.resetModules()`](#jestresetmodules) + [`jest.isolateModules(fn)`](#jestisolatemodulesfn) * [Mock Functions](#mock-functions) + [`jest.fn(implementation?)`](#jestfnimplementation) + [`jest.isMockFunction(fn)`](#jestismockfunctionfn) + [`jest.spyOn(object, methodName)`](#jestspyonobject-methodname) + [`jest.spyOn(object, methodName, accessType?)`](#jestspyonobject-methodname-accesstype) + [`jest.clearAllMocks()`](#jestclearallmocks) + [`jest.resetAllMocks()`](#jestresetallmocks) + [`jest.restoreAllMocks()`](#jestrestoreallmocks) * [Fake Timers](#fake-timers) + [`jest.useFakeTimers(fakeTimersConfig?)`](#jestusefaketimersfaketimersconfig) + [`jest.useRealTimers()`](#jestuserealtimers) + [`jest.runAllTicks()`](#jestrunallticks) + [`jest.runAllTimers()`](#jestrunalltimers) + [`jest.runAllImmediates()`](#jestrunallimmediates) + 
[`jest.advanceTimersByTime(msToRun)`](#jestadvancetimersbytimemstorun) + [`jest.runOnlyPendingTimers()`](#jestrunonlypendingtimers) + [`jest.advanceTimersToNextTimer(steps)`](#jestadvancetimerstonexttimersteps) + [`jest.clearAllTimers()`](#jestclearalltimers) + [`jest.getTimerCount()`](#jestgettimercount) + [`jest.setSystemTime(now?: number | Date)`](#jestsetsystemtimenow-number--date) + [`jest.getRealSystemTime()`](#jestgetrealsystemtime) * [Misc](#misc) + [`jest.setTimeout(timeout)`](#jestsettimeouttimeout) + [`jest.retryTimes(numRetries, options)`](#jestretrytimesnumretries-options) Mock Modules ------------ ### `jest.disableAutomock()` Disables automatic mocking in the module loader. > See `automock` section of [configuration](configuration#automock-boolean) for more information > > After this method is called, all `require()`s will return the real versions of each module (rather than a mocked version). Jest configuration: ``` { "automock": true } ``` Example: ``` export default { authorize: () => { return 'token'; }, }; ``` utils.js ``` import utils from '../utils'; jest.disableAutomock(); test('original implementation', () => { // now we have the original implementation, // even if we set the automocking in a jest configuration expect(utils.authorize()).toBe('token'); }); ``` \_\_tests\_\_/disableAutomocking.js This is usually useful when you have a scenario where the number of dependencies you want to mock is far less than the number of dependencies that you don't. For example, if you're writing a test for a module that uses a large number of dependencies that can be reasonably classified as "implementation details" of the module, then you likely do not want to mock them. Examples of dependencies that might be considered "implementation details" are things ranging from language built-ins (e.g. Array.prototype methods) to highly common utility methods (e.g. underscore/lo-dash, array utilities, etc) and entire libraries like React.js. 
Returns the `jest` object for chaining. *Note: this method was previously called `autoMockOff`. When using `babel-jest`, calls to `disableAutomock` will automatically be hoisted to the top of the code block. Use `autoMockOff` if you want to explicitly avoid this behavior.* ### `jest.enableAutomock()` Enables automatic mocking in the module loader. Returns the `jest` object for chaining. > See `automock` section of [configuration](configuration#automock-boolean) for more information > > Example: ``` export default { authorize: () => { return 'token'; }, isAuthorized: secret => secret === 'wizard', }; ``` utils.js ``` jest.enableAutomock(); import utils from '../utils'; test('original implementation', () => { // now we have the mocked implementation, expect(utils.authorize._isMockFunction).toBeTruthy(); expect(utils.isAuthorized._isMockFunction).toBeTruthy(); }); ``` \_\_tests\_\_/enableAutomocking.js *Note: this method was previously called `autoMockOn`. When using `babel-jest`, calls to `enableAutomock` will automatically be hoisted to the top of the code block. Use `autoMockOn` if you want to explicitly avoid this behavior.* ### `jest.createMockFromModule(moduleName)` ##### renamed in Jest **26.0.0+** Also under the alias: `.genMockFromModule(moduleName)` Given the name of a module, use the automatic mocking system to generate a mocked version of the module for you. This is useful when you want to create a [manual mock](manual-mocks) that extends the automatic mock's behavior. 
Example: ``` export default { authorize: () => { return 'token'; }, isAuthorized: secret => secret === 'wizard', }; ``` utils.js ``` const utils = jest.createMockFromModule('../utils').default; utils.isAuthorized = jest.fn(secret => secret === 'not wizard'); test('implementation created by jest.createMockFromModule', () => { expect(utils.authorize.mock).toBeTruthy(); expect(utils.isAuthorized('not wizard')).toEqual(true); }); ``` \_\_tests\_\_/createMockFromModule.test.js This is how `createMockFromModule` will mock the following data types: #### `Function` Creates a new [mock function](mock-functions). The new function has no formal parameters and when called will return `undefined`. This functionality also applies to `async` functions. #### `Class` Creates a new class. The interface of the original class is maintained, all of the class member functions and properties will be mocked. #### `Object` Creates a new deeply cloned object. The object keys are maintained and their values are mocked. #### `Array` Creates a new empty array, ignoring the original. #### `Primitives` Creates a new property with the same primitive value as the original property. Example: ``` module.exports = { function: function square(a, b) { return a * b; }, asyncFunction: async function asyncSquare(a, b) { const result = (await a) * b; return result; }, class: new (class Bar { constructor() { this.array = [1, 2, 3]; } foo() {} })(), object: { baz: 'foo', bar: { fiz: 1, buzz: [1, 2, 3], }, }, array: [1, 2, 3], number: 123, string: 'baz', boolean: true, symbol: Symbol.for('a.b.c'), }; ``` example.js ``` const example = jest.createMockFromModule('./example'); test('should run example code', () => { // creates a new mocked function with no formal arguments. expect(example.function.name).toEqual('square'); expect(example.function.length).toEqual(0); // async functions get the same treatment as standard synchronous functions. 
expect(example.asyncFunction.name).toEqual('asyncSquare'); expect(example.asyncFunction.length).toEqual(0); // creates a new class with the same interface, member functions and properties are mocked. expect(example.class.constructor.name).toEqual('Bar'); expect(example.class.foo.name).toEqual('foo'); expect(example.class.array.length).toEqual(0); // creates a deeply cloned version of the original object. expect(example.object).toEqual({ baz: 'foo', bar: { fiz: 1, buzz: [], }, }); // creates a new empty array, ignoring the original array. expect(example.array.length).toEqual(0); // creates a new property with the same primitive value as the original property. expect(example.number).toEqual(123); expect(example.string).toEqual('baz'); expect(example.boolean).toEqual(true); expect(example.symbol).toEqual(Symbol.for('a.b.c')); }); ``` \_\_tests\_\_/example.test.js ### `jest.mock(moduleName, factory, options)` Mocks a module with an auto-mocked version when it is being required. `factory` and `options` are optional. For example: ``` module.exports = () => 'banana'; ``` banana.js ``` jest.mock('../banana'); const banana = require('../banana'); // banana will be explicitly mocked. banana(); // will return 'undefined' because the function is auto-mocked. ``` \_\_tests\_\_/test.js The second argument can be used to specify an explicit module factory that is being run instead of using Jest's automocking feature: ``` jest.mock('../moduleName', () => { return jest.fn(() => 42); }); // This runs the function specified as second argument to `jest.mock`. const moduleName = require('../moduleName'); moduleName(); // Will return '42'; ``` When using the `factory` parameter for an ES6 module with a default export, the `__esModule: true` property needs to be specified. This property is normally generated by Babel / TypeScript, but here it needs to be set manually. 
When importing a default export, it's an instruction to import the property named `default` from the export object: ``` import moduleName, {foo} from '../moduleName'; jest.mock('../moduleName', () => { return { __esModule: true, default: jest.fn(() => 42), foo: jest.fn(() => 43), }; }); moduleName(); // Will return 42 foo(); // Will return 43 ``` The third argument can be used to create virtual mocks – mocks of modules that don't exist anywhere in the system: ``` jest.mock( '../moduleName', () => { /* * Custom implementation of a module that doesn't exist in JS, * like a generated module or a native module in react-native. */ }, {virtual: true}, ); ``` > **Warning:** Importing a module in a setup file (as specified by `setupFilesAfterEnv`) will prevent mocking for the module in question, as well as all the modules that it imports. > > Modules that are mocked with `jest.mock` are mocked only for the file that calls `jest.mock`. Another file that imports the module will get the original implementation even if it runs after the test file that mocks the module. Returns the `jest` object for chaining. tip Writing tests in TypeScript? Use [`jest.Mocked`](mock-function-api/index#jestmockedsource) utility type or [`jest.mocked()`](mock-function-api/index#jestmockedsource-options) helper method to have your mocked modules typed. ### `jest.Mocked<Source>` See [TypeScript Usage](mock-function-api/index#jestmockedsource) chapter of Mock Functions page for documentation. ### `jest.mocked(source, options?)` See [TypeScript Usage](mock-function-api/index#jestmockedsource-options) chapter of Mock Functions page for documentation. ### `jest.unmock(moduleName)` Indicates that the module system should never return a mocked version of the specified module from `require()` (e.g. that it should always return the real module). The most common use of this API is for specifying the module a given test intends to be testing (and thus doesn't want automatically mocked). 
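Conceptually, `jest.unmock` removes a module's entry from the mock registry so that `require()` falls through to the real implementation. A simplified plain-Node model of that registry behavior (illustrative only, not Jest's actual internals; the `math` module and helper names here are invented for the sketch):

```javascript
// Minimal model of a mock-aware module registry (illustrative only).
const realModules = {math: {add: (a, b) => a + b}};
const mockRegistry = new Map();

function mock(name) {
  // Auto-mock: same keys as the real module, but every function returns undefined.
  const real = realModules[name];
  const auto = Object.fromEntries(
    Object.keys(real).map(key => [key, () => undefined]),
  );
  mockRegistry.set(name, auto);
}

function unmock(name) {
  // From now on, always resolve to the real module.
  mockRegistry.delete(name);
}

function requireModule(name) {
  return mockRegistry.has(name) ? mockRegistry.get(name) : realModules[name];
}

mock('math');
console.log(requireModule('math').add(1, 2)); // undefined (auto-mocked)
unmock('math');
console.log(requireModule('math').add(1, 2)); // 3 (real implementation)
```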
Returns the `jest` object for chaining. ### `jest.doMock(moduleName, factory, options)` When using `babel-jest`, calls to `mock` will automatically be hoisted to the top of the code block. Use this method if you want to explicitly avoid this behavior. This is useful, for example, when you want to mock a module differently within the same file: ``` beforeEach(() => { jest.resetModules(); }); test('moduleName 1', () => { jest.doMock('../moduleName', () => { return jest.fn(() => 1); }); const moduleName = require('../moduleName'); expect(moduleName()).toEqual(1); }); test('moduleName 2', () => { jest.doMock('../moduleName', () => { return jest.fn(() => 2); }); const moduleName = require('../moduleName'); expect(moduleName()).toEqual(2); }); ``` Using `jest.doMock()` with ES6 imports requires additional steps. Follow these if you don't want to use `require` in your tests: * We have to specify the `__esModule: true` property (see the [`jest.mock()`](#jestmockmodulename-factory-options) API for more information). * Static ES6 module imports are hoisted to the top of the file, so instead we have to import them dynamically using `import()`. * Finally, we need an environment which supports dynamic importing. Please see [Using Babel](getting-started#using-babel) for the initial setup. Then add the plugin [babel-plugin-dynamic-import-node](https://www.npmjs.com/package/babel-plugin-dynamic-import-node), or an equivalent, to your Babel config to enable dynamic importing in Node.
``` beforeEach(() => { jest.resetModules(); }); test('moduleName 1', () => { jest.doMock('../moduleName', () => { return { __esModule: true, default: 'default1', foo: 'foo1', }; }); return import('../moduleName').then(moduleName => { expect(moduleName.default).toEqual('default1'); expect(moduleName.foo).toEqual('foo1'); }); }); test('moduleName 2', () => { jest.doMock('../moduleName', () => { return { __esModule: true, default: 'default2', foo: 'foo2', }; }); return import('../moduleName').then(moduleName => { expect(moduleName.default).toEqual('default2'); expect(moduleName.foo).toEqual('foo2'); }); }); ``` Returns the `jest` object for chaining. ### `jest.dontMock(moduleName)` When using `babel-jest`, calls to `unmock` will automatically be hoisted to the top of the code block. Use this method if you want to explicitly avoid this behavior. Returns the `jest` object for chaining. ### `jest.setMock(moduleName, moduleExports)` Explicitly supplies the mock object that the module system should return for the specified module. Occasionally, the automatically generated mock that the module system would normally provide isn't adequate for your testing needs. Normally under those circumstances you should write a [manual mock](manual-mocks) that is more adequate for the module in question. However, on extremely rare occasions, even a manual mock isn't suitable for your purposes and you need to build the mock yourself inside your test. In these rare scenarios you can use this API to manually fill the slot in the module system's mock-module registry. Returns the `jest` object for chaining. *Note: It is recommended to use [`jest.mock()`](#jestmockmodulename-factory-options) instead.
The `jest.mock` API's second argument is a module factory instead of the expected exported module object.* ### `jest.requireActual(moduleName)` Returns the actual module instead of a mock, bypassing all checks on whether the module should receive a mock implementation or not. Example: ``` jest.mock('../myModule', () => { // Require the original module to not be mocked... const originalModule = jest.requireActual('../myModule'); return { __esModule: true, // Use it when dealing with esModules ...originalModule, getRandom: jest.fn().mockReturnValue(10), }; }); const getRandom = require('../myModule').getRandom; getRandom(); // Always returns 10 ``` ### `jest.requireMock(moduleName)` Returns a mock module instead of the actual module, bypassing all checks on whether the module should be required normally or not. ### `jest.resetModules()` Resets the module registry - the cache of all required modules. This is useful to isolate modules where local state might conflict between tests. Example: ``` const sum1 = require('../sum'); jest.resetModules(); const sum2 = require('../sum'); sum1 === sum2; // > false (Both sum modules are separate "instances" of the sum module.) ``` Example in a test: ``` beforeEach(() => { jest.resetModules(); }); test('works', () => { const sum = require('../sum'); }); test('works too', () => { const sum = require('../sum'); // sum is a different copy of the sum module from the previous test. }); ``` Returns the `jest` object for chaining. ### `jest.isolateModules(fn)` `jest.isolateModules(fn)` goes a step further than `jest.resetModules()` and creates a sandbox registry for the modules that are loaded inside the callback function. This is useful to isolate specific modules for every test so that local module state doesn't conflict between tests. 
``` let myModule; jest.isolateModules(() => { myModule = require('myModule'); }); const otherCopyOfMyModule = require('myModule'); ``` Mock Functions -------------- ### `jest.fn(implementation?)` Returns a new, unused [mock function](mock-function-api). Optionally takes a mock implementation. ``` const mockFn = jest.fn(); mockFn(); expect(mockFn).toHaveBeenCalled(); // With a mock implementation: const returnsTrue = jest.fn(() => true); console.log(returnsTrue()); // true; ``` tip See [Mock Functions](mock-function-api#jestfnimplementation) page for details on TypeScript usage. ### `jest.isMockFunction(fn)` Determines if the given function is a mocked function. ### `jest.spyOn(object, methodName)` Creates a mock function similar to `jest.fn` but also tracks calls to `object[methodName]`. Returns a Jest [mock function](mock-function-api). note By default, `jest.spyOn` also calls the **spied** method. This is different behavior from most other test libraries. If you want to overwrite the original function, you can use `jest.spyOn(object, methodName).mockImplementation(() => customImplementation)` or `object[methodName] = jest.fn(() => customImplementation);` tip Since `jest.spyOn` returns a mock, you can restore the original implementation by calling [jest.restoreAllMocks](#jestrestoreallmocks) in an [afterEach](api#aftereachfn-timeout) hook.
Example: ``` const video = { play() { return true; }, }; module.exports = video; ``` Example test: ``` const video = require('./video'); afterEach(() => { // restore the spy created with spyOn jest.restoreAllMocks(); }); test('plays video', () => { const spy = jest.spyOn(video, 'play'); const isPlaying = video.play(); expect(spy).toHaveBeenCalled(); expect(isPlaying).toBe(true); }); ``` ### `jest.spyOn(object, methodName, accessType?)` Since Jest 22.1.0+, the `jest.spyOn` method takes an optional third argument of `accessType` that can be either `'get'` or `'set'`, which proves to be useful when you want to spy on a getter or a setter, respectively. Example: ``` const video = { // it's a getter! get play() { return true; }, }; module.exports = video; const audio = { _volume: false, // it's a setter! set volume(value) { this._volume = value; }, get volume() { return this._volume; }, }; module.exports = audio; ``` Example test: ``` const audio = require('./audio'); const video = require('./video'); afterEach(() => { // restore the spy created with spyOn jest.restoreAllMocks(); }); test('plays video', () => { const spy = jest.spyOn(video, 'play', 'get'); // we pass 'get' const isPlaying = video.play; expect(spy).toHaveBeenCalled(); expect(isPlaying).toBe(true); }); test('plays audio', () => { const spy = jest.spyOn(audio, 'volume', 'set'); // we pass 'set' audio.volume = 100; expect(spy).toHaveBeenCalled(); expect(audio.volume).toBe(100); }); ``` ### `jest.clearAllMocks()` Clears the `mock.calls`, `mock.instances`, `mock.contexts` and `mock.results` properties of all mocks. Equivalent to calling [`.mockClear()`](mock-function-api#mockfnmockclear) on every mocked function. Returns the `jest` object for chaining. ### `jest.resetAllMocks()` Resets the state of all mocks. Equivalent to calling [`.mockReset()`](mock-function-api#mockfnmockreset) on every mocked function. Returns the `jest` object for chaining. 
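The difference between clearing and resetting can be pictured with a small plain-Node model of a mock function's state (a simplified sketch, not Jest's actual mock implementation; `makeMock` is an invented helper): clearing empties the recorded calls and results, while resetting additionally drops any mock implementation.

```javascript
// Simplified model of a Jest-style mock function's state (illustrative only).
function makeMock() {
  const state = {calls: [], results: [], impl: undefined};
  const fn = (...args) => {
    state.calls.push(args);
    const value = state.impl ? state.impl(...args) : undefined;
    state.results.push({type: 'return', value});
    return value;
  };
  fn.mock = state;
  fn.mockImplementation = impl => ((state.impl = impl), fn);
  // mockClear: forget recorded calls/results, keep the implementation.
  fn.mockClear = () => {
    state.calls.length = 0;
    state.results.length = 0;
  };
  // mockReset: mockClear plus dropping the implementation.
  fn.mockReset = () => {
    fn.mockClear();
    state.impl = undefined;
  };
  return fn;
}

const mock = makeMock().mockImplementation(() => 42);
mock(); // records a call, returns 42
mock.mockClear();
console.log(mock.mock.calls.length); // 0 — but the implementation survives:
console.log(mock()); // 42
mock.mockReset();
console.log(mock()); // undefined — the implementation is gone too
```

`jest.clearAllMocks()` and `jest.resetAllMocks()` apply these two operations, respectively, to every mock at once.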
### `jest.restoreAllMocks()` Restores all mocks back to their original value. Equivalent to calling [`.mockRestore()`](mock-function-api#mockfnmockrestore) on every mocked function. Beware that `jest.restoreAllMocks()` only works when the mock was created with `jest.spyOn`; other mocks will require you to manually restore them. Fake Timers ----------- ### `jest.useFakeTimers(fakeTimersConfig?)` Instructs Jest to use fake versions of the global date, performance, time and timer APIs. Fake timers implementation is backed by [`@sinonjs/fake-timers`](https://github.com/sinonjs/fake-timers). Fake timers will swap out `Date`, `performance.now()`, `queueMicrotask()`, `setImmediate()`, `clearImmediate()`, `setInterval()`, `clearInterval()`, `setTimeout()`, `clearTimeout()` with an implementation that gets its time from the fake clock. In Node environment `process.hrtime`, `process.nextTick()` and in JSDOM environment `requestAnimationFrame()`, `cancelAnimationFrame()`, `requestIdleCallback()`, `cancelIdleCallback()` will be replaced as well. Configuration options: ``` type FakeableAPI = | 'Date' | 'hrtime' | 'nextTick' | 'performance' | 'queueMicrotask' | 'requestAnimationFrame' | 'cancelAnimationFrame' | 'requestIdleCallback' | 'cancelIdleCallback' | 'setImmediate' | 'clearImmediate' | 'setInterval' | 'clearInterval' | 'setTimeout' | 'clearTimeout'; type FakeTimersConfig = { /** * If set to `true` all timers will be advanced automatically by 20 milliseconds * every 20 milliseconds. A custom time delta may be provided by passing a number. * The default is `false`. */ advanceTimers?: boolean | number; /** * List of names of APIs that should not be faked. The default is `[]`, meaning * all APIs are faked. */ doNotFake?: Array<FakeableAPI>; /** * Use the old fake timers implementation instead of one backed by `@sinonjs/fake-timers`. * The default is `false`. */ legacyFakeTimers?: boolean; /** Sets current system time to be used by fake timers. The default is `Date.now()`. 
*/ now?: number | Date; /** * The maximum number of recursive timers that will be run when calling `jest.runAllTimers()`. * The default is `100_000` timers. */ timerLimit?: number; }; ``` Calling `jest.useFakeTimers()` will use fake timers for all tests within the file, until original timers are restored with `jest.useRealTimers()`. You can call `jest.useFakeTimers()` or `jest.useRealTimers()` from anywhere: top level, inside a `test` block, etc. Keep in mind that this is a **global operation** and will affect other tests within the same file. Calling `jest.useFakeTimers()` once again in the same test file would reset the internal state (e.g. timer count) and reinstall fake timers using the provided options: ``` test('advance the timers automatically', () => { jest.useFakeTimers({advanceTimers: true}); // ... }); test('do not advance the timers and do not fake `performance`', () => { jest.useFakeTimers({doNotFake: ['performance']}); // ... }); test('uninstall fake timers for the rest of tests in the file', () => { jest.useRealTimers(); // ... }); ``` Legacy Fake Timers In some cases you might have to use the legacy implementation of fake timers. It can be enabled like this (additional options are not supported): ``` jest.useFakeTimers({ legacyFakeTimers: true, }); ``` Legacy fake timers will swap out `setImmediate()`, `clearImmediate()`, `setInterval()`, `clearInterval()`, `setTimeout()`, `clearTimeout()` with Jest [mock functions](mock-function-api). In Node environment `process.nextTick()` and in JSDOM environment `requestAnimationFrame()`, `cancelAnimationFrame()` will also be replaced. Returns the `jest` object for chaining. ### `jest.useRealTimers()` Instructs Jest to restore the original implementations of the global date, performance, time and timer APIs.
For example, you may call `jest.useRealTimers()` inside `afterEach` hook to restore timers after each test: ``` afterEach(() => { jest.useRealTimers(); }); test('do something with fake timers', () => { jest.useFakeTimers(); // ... }); test('do something with real timers', () => { // ... }); ``` Returns the `jest` object for chaining. ### `jest.runAllTicks()` Exhausts the **micro**-task queue (usually interfaced in node via `process.nextTick`). When this API is called, all pending micro-tasks that have been queued via `process.nextTick` will be executed. Additionally, if those micro-tasks themselves schedule new micro-tasks, those will be continually exhausted until there are no more micro-tasks remaining in the queue. ### `jest.runAllTimers()` Exhausts both the **macro**-task queue (i.e., all tasks queued by `setTimeout()`, `setInterval()`, and `setImmediate()`) and the **micro**-task queue (usually interfaced in node via `process.nextTick`). When this API is called, all pending macro-tasks and micro-tasks will be executed. If those tasks themselves schedule new tasks, those will be continually exhausted until there are no more tasks remaining in the queue. This is often useful for synchronously executing setTimeouts during a test in order to synchronously assert about some behavior that would only happen after the `setTimeout()` or `setInterval()` callbacks executed. See the [Timer mocks](timer-mocks) doc for more information. ### `jest.runAllImmediates()` Exhausts all tasks queued by `setImmediate()`. info This function is only available when using legacy fake timers implementation. ### `jest.advanceTimersByTime(msToRun)` Executes only the macro task queue (i.e. all tasks queued by `setTimeout()` or `setInterval()` and `setImmediate()`). When this API is called, all timers are advanced by `msToRun` milliseconds. All pending "macro-tasks" that have been queued via `setTimeout()` or `setInterval()`, and would be executed within this time frame will be executed. 
Additionally, if those macro-tasks schedule new macro-tasks that would be executed within the same time frame, those will be executed until there are no more macro-tasks remaining in the queue, that should be run within `msToRun` milliseconds. ### `jest.runOnlyPendingTimers()` Executes only the macro-tasks that are currently pending (i.e., only the tasks that have been queued by `setTimeout()` or `setInterval()` up to this point). If any of the currently pending macro-tasks schedule new macro-tasks, those new tasks will not be executed by this call. This is useful for scenarios such as one where the module being tested schedules a `setTimeout()` whose callback schedules another `setTimeout()` recursively (meaning the scheduling never stops). In these scenarios, it's useful to be able to run forward in time by a single step at a time. ### `jest.advanceTimersToNextTimer(steps)` Advances all timers by the needed milliseconds so that only the next timeouts/intervals will run. Optionally, you can provide `steps`, so it will run `steps` amount of next timeouts/intervals. ### `jest.clearAllTimers()` Removes any pending timers from the timer system. This means, if any timers have been scheduled (but have not yet executed), they will be cleared and will never have the opportunity to execute in the future. ### `jest.getTimerCount()` Returns the number of fake timers still left to run. ### `jest.setSystemTime(now?: number | Date)` Set the current system time used by fake timers. Simulates a user changing the system clock while your program is running. It affects the current time but it does not in itself cause e.g. timers to fire; they will fire exactly as they would have done without the call to `jest.setSystemTime()`. info This function is not available when using legacy fake timers implementation. ### `jest.getRealSystemTime()` When mocking time, `Date.now()` will also be mocked. If you for some reason need access to the real current time, you can invoke this function. 
info This function is not available when using legacy fake timers implementation. Misc ---- ### `jest.setTimeout(timeout)` Set the default timeout interval (in milliseconds) for all tests and before/after hooks in the test file. This only affects the test file from which this function is called. To set timeout intervals on different tests in the same file, use the [`timeout` option on each individual test](api#testname-fn-timeout). *Note: The default timeout interval is 5 seconds if this method is not called.* *Note: If you want to set the timeout for all test files, a good place to do this is in `setupFilesAfterEnv`.* Example: ``` jest.setTimeout(1000); // 1 second ``` ### `jest.retryTimes(numRetries, options)` Runs failed tests n-times until they pass or until the max number of retries is exhausted. `options` are optional. This only works with the default [jest-circus](https://github.com/facebook/jest/tree/main/packages/jest-circus) runner! This must live at the top-level of a test file or in a describe block. Retries *will not* work if `jest.retryTimes()` is called in a `beforeEach` or a `test` block. Example in a test: ``` jest.retryTimes(3); test('will fail', () => { expect(true).toBe(false); }); ``` If `logErrorsBeforeRetry` is enabled, Jest will log the error(s) that caused the test to fail to the console, providing visibility on why a retry occurred. ``` jest.retryTimes(3, {logErrorsBeforeRetry: true}); test('will fail', () => { expect(true).toBe(false); }); ``` Returns the `jest` object for chaining.
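The advancing semantics described earlier for `jest.advanceTimersByTime()` — run every timer due within the window, including timers that other timers schedule inside that window — can be sketched as a tiny plain-Node fake clock (an illustrative model, not Jest's or `@sinonjs/fake-timers`' implementation; `FakeClock` is an invented name):

```javascript
// Tiny fake clock modeling advanceTimersByTime semantics (illustrative only).
class FakeClock {
  constructor() {
    this.now = 0;
    this.timers = []; // entries of shape {due, cb}
  }
  setTimeout(cb, ms) {
    this.timers.push({due: this.now + ms, cb});
  }
  advanceTimersByTime(ms) {
    const deadline = this.now + ms;
    // Keep firing the earliest timer that falls within the window, so
    // timers scheduled by callbacks also run if they come due in time.
    for (;;) {
      const next = this.timers
        .filter(t => t.due <= deadline)
        .sort((a, b) => a.due - b.due)[0];
      if (!next) break;
      this.timers.splice(this.timers.indexOf(next), 1);
      this.now = next.due;
      next.cb();
    }
    this.now = deadline;
  }
}

const clock = new FakeClock();
const fired = [];
clock.setTimeout(() => {
  fired.push('outer');
  clock.setTimeout(() => fired.push('inner'), 50); // comes due at t = 150
}, 100);
clock.advanceTimersByTime(200);
console.log(fired); // ['outer', 'inner'] — both fall within the 200 ms window
```

A timer due *after* the window (say at 300 ms) would stay pending, which is the behavior `jest.getTimerCount()` and `jest.runOnlyPendingTimers()` let you observe and control in real Jest tests.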
jest Architecture Architecture ============ If you are interested in learning more about how Jest works, understand its architecture, and how Jest is split up into individual reusable packages, check out this video: If you'd like to learn how to build a testing framework like Jest from scratch, check out this video: There is also a [written guide you can follow](https://cpojer.net/posts/building-a-javascript-testing-framework). It teaches the fundamental concepts of Jest and explains how various parts of Jest can be used to compose a custom testing framework. jest DOM Manipulation DOM Manipulation ================ Another class of functions that is often considered difficult to test is code that directly manipulates the DOM. Let's see how we can test the following snippet of jQuery code that listens to a click event, fetches some data asynchronously and sets the content of a span. ``` 'use strict'; const $ = require('jquery'); const fetchCurrentUser = require('./fetchCurrentUser.js'); $('#button').click(() => { fetchCurrentUser(user => { const loggedText = 'Logged ' + (user.loggedIn ? 
'In' : 'Out'); $('#username').text(user.fullName + ' - ' + loggedText); }); }); ``` displayUser.js Again, we create a test file in the `__tests__/` folder: ``` 'use strict'; jest.mock('../fetchCurrentUser'); test('displays a user after a click', () => { // Set up our document body document.body.innerHTML = '<div>' + ' <span id="username" />' + ' <button id="button" />' + '</div>'; // This module has a side-effect require('../displayUser'); const $ = require('jquery'); const fetchCurrentUser = require('../fetchCurrentUser'); // Tell the fetchCurrentUser mock function to automatically invoke // its callback with some data fetchCurrentUser.mockImplementation(cb => { cb({ fullName: 'Johnny Cash', loggedIn: true, }); }); // Use jquery to emulate a click on our button $('#button').click(); // Assert that the fetchCurrentUser function was called, and that the // #username span's inner text was updated as we'd expect it to. expect(fetchCurrentUser).toBeCalled(); expect($('#username').text()).toEqual('Johnny Cash - Logged In'); }); ``` \_\_tests\_\_/displayUser-test.js We are mocking `fetchCurrentUser.js` so that our test doesn't make a real network request but instead resolves to mock data locally. This ensures that our test can complete in milliseconds rather than seconds and guarantees a fast unit test iteration speed. Also, the function being tested adds an event listener on the `#button` DOM element, so we need to set up our DOM correctly for the test. `jsdom` and the `jest-environment-jsdom` package simulate a DOM environment as if you were in the browser. This means that every DOM API that we call can be observed in the same way it would be observed in a browser! 
To get started with the JSDOM [test environment](configuration#testenvironment-string), the `jest-environment-jsdom` package must be installed if it's not already: * npm * Yarn ``` npm install --save-dev jest-environment-jsdom ``` ``` yarn add --dev jest-environment-jsdom ``` The code for this example is available at [examples/jquery](https://github.com/facebook/jest/tree/main/examples/jquery). jest Jest CLI Options Jest CLI Options ================ The `jest` command line runner has a number of useful options. You can run `jest --help` to view all available options. Many of the options shown below can also be used together to run tests exactly the way you want. Every one of Jest's [Configuration](configuration) options can also be specified through the CLI. Here is a brief overview: Running from the command line ----------------------------- Run all tests (default): ``` jest ``` Run only the tests that were specified with a pattern or filename: ``` jest my-test #or jest path/to/my-test.js ``` Run tests related to changed files based on hg/git (uncommitted files): ``` jest -o ``` Run tests related to `path/to/fileA.js` and `path/to/fileB.js`: ``` jest --findRelatedTests path/to/fileA.js path/to/fileB.js ``` Run tests that match this spec name (match against the name in `describe` or `test`, basically). ``` jest -t name-of-spec ``` Run watch mode: ``` jest --watch #runs jest -o by default jest --watchAll #runs all tests ``` Watch mode also lets you specify the name or path of a file to focus on a specific set of tests. Using with yarn --------------- If you run Jest via `yarn test`, you can pass the command line arguments directly as Jest arguments. Instead of: ``` jest -u -t="ColorPicker" ``` you can use: ``` yarn test -u -t="ColorPicker" ``` Using with npm scripts ---------------------- If you run Jest via `npm test`, you can still use the command line arguments by inserting a `--` between `npm test` and the Jest arguments.
Instead of: ``` jest -u -t="ColorPicker" ``` you can use: ``` npm test -- -u -t="ColorPicker" ``` Camelcase & dashed args support ------------------------------- Jest supports both camelcase and dashed arg formats. The following examples will have an equal result: ``` jest --collect-coverage jest --collectCoverage ``` Arguments can also be mixed: ``` jest --update-snapshot --detectOpenHandles ``` Options ------- note CLI options take precedence over values from the [Configuration](configuration). * [Using with npm scripts](#using-with-npm-scripts) * [Camelcase & dashed args support](#camelcase--dashed-args-support) * [Options](#options) * [Reference](#reference) + [`jest <regexForTestFiles>`](#jest-regexfortestfiles) + [`--bail[=<n>]`](#--bailn) + [`--cache`](#--cache) + [`--changedFilesWithAncestor`](#--changedfileswithancestor) + [`--changedSince`](#--changedsince) + [`--ci`](#--ci) + [`--clearCache`](#--clearcache) + [`--clearMocks`](#--clearmocks) + [`--collectCoverageFrom=<glob>`](#--collectcoveragefromglob) + [`--colors`](#--colors) + [`--config=<path>`](#--configpath) + [`--coverage[=<boolean>]`](#--coverageboolean) + [`--coverageProvider=<provider>`](#--coverageproviderprovider) + [`--debug`](#--debug) + [`--detectOpenHandles`](#--detectopenhandles) + [`--env=<environment>`](#--envenvironment) + [`--errorOnDeprecated`](#--errorondeprecated) + [`--expand`](#--expand) + [`--filter=<file>`](#--filterfile) + [`--findRelatedTests <spaceSeparatedListOfSourceFiles>`](#--findrelatedtests-spaceseparatedlistofsourcefiles) + [`--forceExit`](#--forceexit) + [`--help`](#--help) + [`--ignoreProjects <project1> ... 
<projectN>`](#--ignoreprojects-project1--projectn) + [`--init`](#--init) + [`--injectGlobals`](#--injectglobals) + [`--json`](#--json) + [`--lastCommit`](#--lastcommit) + [`--listTests`](#--listtests) + [`--logHeapUsage`](#--logheapusage) + [`--maxConcurrency=<num>`](#--maxconcurrencynum) + [`--maxWorkers=<num>|<string>`](#--maxworkersnumstring) + [`--noStackTrace`](#--nostacktrace) + [`--notify`](#--notify) + [`--onlyChanged`](#--onlychanged) + [`--outputFile=<filename>`](#--outputfilefilename) + [`--passWithNoTests`](#--passwithnotests) + [`--projects <path1> ... <pathN>`](#--projects-path1--pathn) + [`--reporters`](#--reporters) + [`--resetMocks`](#--resetmocks) + [`--restoreMocks`](#--restoremocks) + [`--roots`](#--roots) + [`--runInBand`](#--runinband) + [`--runTestsByPath`](#--runtestsbypath) + [`--selectProjects <project1> ... <projectN>`](#--selectprojects-project1--projectn) + [`--setupFilesAfterEnv <path1> ... <pathN>`](#--setupfilesafterenv-path1--pathn) + [`--shard`](#--shard) + [`--showConfig`](#--showconfig) + [`--silent`](#--silent) + [`--testEnvironmentOptions=<json string>`](#--testenvironmentoptionsjson-string) + [`--testLocationInResults`](#--testlocationinresults) + [`--testMatch glob1 ... 
globN`](#--testmatch-glob1--globn) + [`--testNamePattern=<regex>`](#--testnamepatternregex) + [`--testPathIgnorePatterns=<regex>|[array]`](#--testpathignorepatternsregexarray) + [`--testPathPattern=<regex>`](#--testpathpatternregex) + [`--testRunner=<path>`](#--testrunnerpath) + [`--testSequencer=<path>`](#--testsequencerpath) + [`--testTimeout=<number>`](#--testtimeoutnumber) + [`--updateSnapshot`](#--updatesnapshot) + [`--useStderr`](#--usestderr) + [`--verbose`](#--verbose) + [`--version`](#--version) + [`--watch`](#--watch) + [`--watchAll`](#--watchall) + [`--watchman`](#--watchman) Reference --------- ### `jest <regexForTestFiles>` When you run `jest` with an argument, that argument is treated as a regular expression to match against files in your project. It is possible to run test suites by providing a pattern. Only the files that the pattern matches will be picked up and executed. Depending on your terminal, you may need to quote this argument: `jest "my.*(complex)?pattern"`. On Windows, you will need to use `/` as a path separator or escape `\` as `\\`. ### `--bail[=<n>]` Alias: `-b`. Exit the test suite immediately upon `n` number of failing test suite. Defaults to `1`. ### `--cache` Whether to use the cache. Defaults to true. Disable the cache using `--no-cache`. caution The cache should only be disabled if you are experiencing caching related problems. On average, disabling the cache makes Jest at least two times slower. If you want to inspect the cache, use `--showConfig` and look at the `cacheDirectory` value. If you need to clear the cache, use `--clearCache`. ### `--changedFilesWithAncestor` Runs tests related to the current changes and the changes made in the last commit. Behaves similarly to `--onlyChanged`. ### `--changedSince` Runs tests related to the changes since the provided branch or commit hash. If the current branch has diverged from the given branch, then only changes made locally will be tested. Behaves similarly to `--onlyChanged`. 
### `--ci` When this option is provided, Jest will assume it is running in a CI environment. This changes the behavior when a new snapshot is encountered. Instead of the regular behavior of storing a new snapshot automatically, it will fail the test and require Jest to be run with `--updateSnapshot`. ### `--clearCache` Deletes the Jest cache directory and then exits without running tests. Will delete `cacheDirectory` if the option is passed, or Jest's default cache directory. The default cache directory can be found by calling `jest --showConfig`. caution Clearing the cache will reduce performance. ### `--clearMocks` Automatically clear mock calls, instances, contexts and results before every test. Equivalent to calling [`jest.clearAllMocks()`](jest-object#jestclearallmocks) before each test. This does not remove any mock implementation that may have been provided. ### `--collectCoverageFrom=<glob>` A glob pattern relative to `rootDir` matching the files that coverage info needs to be collected from. ### `--colors` Forces test results output highlighting even if stdout is not a TTY. ### `--config=<path>` Alias: `-c`. The path to a Jest config file specifying how to find and execute tests. If no `rootDir` is set in the config, the directory containing the config file is assumed to be the `rootDir` for the project. This can also be a JSON-encoded value which Jest will use as configuration. ### `--coverage[=<boolean>]` Alias: `--collectCoverage`. Indicates that test coverage information should be collected and reported in the output. Optionally pass `<boolean>` to override option set in configuration. ### `--coverageProvider=<provider>` Indicates which provider should be used to instrument code for coverage. Allowed values are `babel` (default) or `v8`. Note that using `v8` is considered experimental. This uses V8's builtin code coverage rather than one based on Babel. It is not as well tested, and it has also improved in the last few releases of Node. 
Using the latest versions of node (v14 at the time of this writing) will yield better results. ### `--debug` Print debugging info about your Jest config. ### `--detectOpenHandles` Attempt to collect and print open handles preventing Jest from exiting cleanly. Use this in cases where you need to use `--forceExit` in order for Jest to exit to potentially track down the reason. This implies `--runInBand`, making tests run serially. Implemented using [`async_hooks`](https://nodejs.org/api/async_hooks.html). This option has a significant performance penalty and should only be used for debugging. ### `--env=<environment>` The test environment used for all tests. This can point to any file or node module. Examples: `jsdom`, `node` or `path/to/my-environment.js`. ### `--errorOnDeprecated` Make calling deprecated APIs throw helpful error messages. Useful for easing the upgrade process. ### `--expand` Alias: `-e`. Use this flag to show full diffs and errors instead of a patch. ### `--filter=<file>` Path to a module exporting a filtering function. This asynchronous function receives a list of test paths which can be manipulated to exclude tests from running by returning an object with the "filtered" property. Especially useful when used in conjunction with a testing infrastructure to filter known broken tests, e.g. ``` module.exports = testPaths => { const allowedPaths = testPaths.filter(filteringFunction); // ["path1.spec.js", "path2.spec.js", etc] return { filtered: allowedPaths, }; }; ``` my-filter.js ### `--findRelatedTests <spaceSeparatedListOfSourceFiles>` Find and run the tests that cover a space separated list of source files that were passed in as arguments. Useful for pre-commit hook integration to run the minimal amount of tests necessary. Can be used together with `--coverage` to include test coverage for the source files, no duplicate `--collectCoverageFrom` arguments needed. ### `--forceExit` Force Jest to exit after all tests have completed running.
This is useful when resources set up by test code cannot be adequately cleaned up. caution This feature is an escape-hatch. If Jest doesn't exit at the end of a test run, it means external resources are still being held on to or timers are still pending in your code. It is advised to tear down external resources after each test to make sure Jest can shut down cleanly. You can use `--detectOpenHandles` to help track it down. ### `--help` Show the help information, similar to this page. ### `--ignoreProjects <project1> ... <projectN>` Ignore the tests of the specified projects. Jest uses the attribute `displayName` in the configuration to identify each project. If you use this option, you should provide a `displayName` to all your projects. ### `--init` Generate a basic configuration file. Based on your project, Jest will ask you a few questions that will help to generate a `jest.config.js` file with a short description for each option. ### `--injectGlobals` Insert Jest's globals (`expect`, `test`, `describe`, `beforeEach` etc.) into the global environment. If you set this to `false`, you should import from `@jest/globals`, e.g. ``` import {expect, jest, test} from '@jest/globals'; jest.useFakeTimers(); test('some test', () => { expect(Date.now()).toBe(0); }); ``` note This option is only supported using the default `jest-circus` test runner. ### `--json` Prints the test results in JSON. This mode will send all other test output and user messages to stderr. ### `--lastCommit` Run all tests affected by file changes in the last commit made. Behaves similarly to `--onlyChanged`. ### `--listTests` Lists all test files that Jest will run given the arguments, and exits. ### `--logHeapUsage` Logs the heap usage after every test. Useful to debug memory leaks. Use together with `--runInBand` and `--expose-gc` in node. ### `--maxConcurrency=<num>` Prevents Jest from executing more than the specified number of tests at the same time. 
Only affects tests that use `test.concurrent`. ### `--maxWorkers=<num>|<string>` Alias: `-w`. Specifies the maximum number of workers the worker-pool will spawn for running tests. In single run mode, this defaults to the number of cores available on your machine minus one for the main thread. In watch mode, this defaults to half of the available cores on your machine to ensure Jest is unobtrusive and does not grind your machine to a halt. It may be useful to adjust this in resource-limited environments like CIs but the defaults should be adequate for most use-cases. For environments with variable CPUs available, you can use percentage-based configuration: `--maxWorkers=50%` ### `--noStackTrace` Disables stack trace in test results output. ### `--notify` Activates notifications for test results. Good for when you don't want your consciousness to be able to focus on anything except JavaScript testing. ### `--onlyChanged` Alias: `-o`. Attempts to identify which tests to run based on which files have changed in the current repository. Only works if you're running tests in a git/hg repository at the moment and requires a static dependency graph (i.e. no dynamic requires). ### `--outputFile=<filename>` Write test results to a file when the `--json` option is also specified. The returned JSON structure is documented in [testResultsProcessor](configuration#testresultsprocessor-string). ### `--passWithNoTests` Allows the test suite to pass when no files are found. ### `--projects <path1> ... <pathN>` Run tests from one or more projects, found in the specified paths; also takes path globs. This option is the CLI equivalent of the [`projects`](configuration#projects-arraystring--projectconfig) configuration option. Note that if configuration files are found in the specified paths, *all* projects specified within those configuration files will be run. ### `--reporters` Run tests with specified reporters. 
[Reporter options](configuration#reporters-arraymodulename--modulename-options) are not available via CLI. Example with multiple reporters: `jest --reporters="default" --reporters="jest-junit"` ### `--resetMocks` Automatically reset mock state before every test. Equivalent to calling [`jest.resetAllMocks()`](jest-object#jestresetallmocks) before each test. This will lead to any mocks having their fake implementations removed but does not restore their initial implementation. ### `--restoreMocks` Automatically restore mock state and implementation before every test. Equivalent to calling [`jest.restoreAllMocks()`](jest-object#jestrestoreallmocks) before each test. This will lead to any mocks having their fake implementations removed and their initial implementation restored. ### `--roots` A list of paths to directories that Jest should use to search for files in. ### `--runInBand` Alias: `-i`. Run all tests serially in the current process, rather than creating a worker pool of child processes that run tests. This can be useful for debugging. ### `--runTestsByPath` Run only the tests that were specified with their exact paths. tip The default regex matching works fine on small runs, but becomes slow if provided with multiple patterns and/or against a lot of tests. This option replaces the regex matching logic, thereby optimizing the time it takes Jest to filter specific test files. ### `--selectProjects <project1> ... <projectN>` Run the tests of the specified projects. Jest uses the attribute `displayName` in the configuration to identify each project. If you use this option, you should provide a `displayName` to all your projects. ### `--setupFilesAfterEnv <path1> ... <pathN>` A list of paths to modules that run some code to configure or to set up the testing framework before each test. Beware that files imported by the setup scripts will not be mocked during testing. 
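The difference between `--resetMocks` and `--restoreMocks` described above is easiest to see in a toy model of a spy. This is only an illustrative sketch of the documented semantics; it is not Jest's implementation, and `spyOn` here is a hypothetical helper, not the real `jest.spyOn`:

```javascript
// Toy spy illustrating --resetMocks vs --restoreMocks semantics.
// Illustrative only -- not Jest's implementation; `spyOn` is a hypothetical helper.
function spyOn(obj, key) {
  const original = obj[key];
  let impl;
  const spy = (...args) => (impl ? impl(...args) : undefined);
  spy.mockImplementation = fn => { impl = fn; };
  spy.mockReset = () => { impl = undefined; };      // like --resetMocks: fake dropped, still a mock
  spy.mockRestore = () => { obj[key] = original; }; // like --restoreMocks: real method put back
  obj[key] = spy;
  return spy;
}

const math = {double: n => n * 2};
const spy = spyOn(math, 'double');
spy.mockImplementation(() => 99);
console.log(math.double(2)); // 99 -- fake implementation
spy.mockReset();
console.log(math.double(2)); // undefined -- fake removed, original NOT restored
spy.mockRestore();
console.log(math.double(2)); // 4 -- original implementation restored
```

In other words, with `--resetMocks` every test starts with fake implementations dropped, while `--restoreMocks` additionally gives spied-on objects their real methods back.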
### `--shard` The test suite shard to execute in a format of `(?<shardIndex>\d+)/(?<shardCount>\d+)`. `shardIndex` describes which shard to select while `shardCount` controls the number of shards the suite should be split into. `shardIndex` and `shardCount` have to be 1-based, positive numbers, and `shardIndex` has to be lower than or equal to `shardCount`. When `shard` is specified the configured [`testSequencer`](configuration#testsequencer-string) has to implement a `shard` method. For example, to split the suite into three shards, each running one third of the tests: ``` jest --shard=1/3 jest --shard=2/3 jest --shard=3/3 ``` ### `--showConfig` Prints your Jest config and then exits. ### `--silent` Prevent tests from printing messages through the console. ### `--testEnvironmentOptions=<json string>` A JSON string with options that will be passed to the `testEnvironment`. The relevant options depend on the environment. ### `--testLocationInResults` Adds a `location` field to test results. Useful if you want to report the location of a test in a reporter. Note that `column` is 0-indexed while `line` is not. ``` { "column": 4, "line": 5 } ``` ### `--testMatch glob1 ... globN` The glob patterns Jest uses to detect test files. Please refer to the [`testMatch` configuration](configuration#testmatch-arraystring) for details. ### `--testNamePattern=<regex>` Alias: `-t`. Run only tests with a name that matches the regex. For example, suppose you want to run only tests related to authorization which will have names like "GET /api/posts with auth", then you can use `jest -t=auth`. tip The regex is matched against the full name, which is a combination of the test name and all its surrounding describe blocks. ### `--testPathIgnorePatterns=<regex>|[array]` A single or array of regexp pattern strings that are tested against all test paths before executing the test. 
Contrary to `--testPathPattern`, it will only run those tests with a path that does not match the provided regexp expressions. To pass as an array use escaped parentheses and space-delimited regexps such as `\(/node_modules/ /tests/e2e/\)`. Alternatively, you can omit parentheses by combining regexps into a single regexp like `/node_modules/|/tests/e2e/`. These two examples are equivalent. ### `--testPathPattern=<regex>` A regexp pattern string that is matched against all test paths before executing the test. On Windows, you will need to use `/` as a path separator or escape `\` as `\\`. ### `--testRunner=<path>` Lets you specify a custom test runner. ### `--testSequencer=<path>` Lets you specify a custom test sequencer. Please refer to the [`testSequencer` configuration](configuration#testsequencer-string) for details. ### `--testTimeout=<number>` Default timeout of a test in milliseconds. Default value: 5000. ### `--updateSnapshot` Alias: `-u`. Use this flag to re-record every snapshot that fails during this test run. Can be used together with a test suite pattern or with `--testNamePattern` to re-record snapshots. ### `--useStderr` Divert all output to stderr. ### `--verbose` Display individual test results with the test suite hierarchy. ### `--version` Alias: `-v`. Print the version and exit. ### `--watch` Watch files for changes and rerun tests related to changed files. If you want to re-run all tests when a file has changed, use the `--watchAll` option instead. ### `--watchAll` Watch files for changes and rerun all tests when something changes. If you want to re-run only the tests that depend on the changed files, use the `--watch` option. Use `--watchAll=false` to explicitly disable the watch mode. Note that in most CI environments, this is automatically handled for you. ### `--watchman` Whether to use [`watchman`](https://facebook.github.io/watchman/) for file crawling. Defaults to `true`. Disable using `--no-watchman`.
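As a quick sanity check that the two `--testPathIgnorePatterns` spellings really are equivalent, the combined regexp matches exactly the paths that either separate pattern matches. The paths below are made-up examples; Jest matches against absolute test paths:

```javascript
// Two equivalent ways to express the same ignore patterns.
const separate = [/\/node_modules\//, /\/tests\/e2e\//];
const combined = /\/node_modules\/|\/tests\/e2e\//;

// Made-up example paths -- Jest tests the patterns against absolute paths.
const paths = [
  '/repo/src/app.test.js',
  '/repo/node_modules/pkg/index.test.js',
  '/repo/tests/e2e/login.test.js',
];

for (const p of paths) {
  const ignoredBySeparate = separate.some(re => re.test(p));
  const ignoredByCombined = combined.test(p);
  console.log(p, ignoredBySeparate === ignoredByCombined); // true for every path
}
```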
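The 1-based contract of the `--shard` option above can be sketched as a tiny `shard` method of the kind a custom test sequencer might implement. This is an illustrative sketch only, not Jest's built-in sequencer logic:

```javascript
// Toy shard(): shard i of n keeps every n-th test, offset by the 1-based index.
// Illustrative sketch -- not Jest's built-in sequencer logic.
function shard(tests, {shardIndex, shardCount}) {
  return tests.filter((_, i) => i % shardCount === shardIndex - 1);
}

const tests = ['a.test.js', 'b.test.js', 'c.test.js', 'd.test.js'];
console.log(shard(tests, {shardIndex: 1, shardCount: 3})); // [ 'a.test.js', 'd.test.js' ]
console.log(shard(tests, {shardIndex: 2, shardCount: 3})); // [ 'b.test.js' ]
console.log(shard(tests, {shardIndex: 3, shardCount: 3})); // [ 'c.test.js' ]
```

Run with `jest --shard=1/3`, `--shard=2/3`, and `--shard=3/3` on separate machines, each machine executes only its slice, and the three slices together cover every test exactly once.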
jest Using with puppeteer Using with puppeteer ==================== With the [Global Setup/Teardown](configuration#globalsetup-string) and [Async Test Environment](configuration#testenvironment-string) APIs, Jest can work smoothly with [puppeteer](https://github.com/GoogleChrome/puppeteer). > Generating code coverage for test files using Puppeteer is currently not possible if your test uses `page.$eval`, `page.$$eval` or `page.evaluate` as the passed function is executed outside of Jest's scope. Check out [issue #7962](https://github.com/facebook/jest/issues/7962#issuecomment-495272339) on GitHub for a workaround. > > Use jest-puppeteer Preset ------------------------- [Jest Puppeteer](https://github.com/smooth-code/jest-puppeteer) provides all required configuration to run your tests using Puppeteer. 1. First, install `jest-puppeteer` * npm * Yarn ``` npm install --save-dev jest-puppeteer ``` ``` yarn add --dev jest-puppeteer ``` 2. Specify preset in your [Jest configuration](configuration): ``` { "preset": "jest-puppeteer" } ``` 3. Write your test ``` describe('Google', () => { beforeAll(async () => { await page.goto('https://google.com'); }); it('should be titled "Google"', async () => { await expect(page.title()).resolves.toMatch('Google'); }); }); ``` There's no need to load any dependencies. Puppeteer's `page` and `browser` classes will automatically be exposed. See [documentation](https://github.com/smooth-code/jest-puppeteer). Custom example without jest-puppeteer preset -------------------------------------------- You can also hook up puppeteer from scratch. The basic idea is to: 1. launch Puppeteer and save its websocket endpoint to a file in Global Setup 2. connect to puppeteer from each Test Environment 3. 
close puppeteer with Global Teardown Here's an example of the GlobalSetup script ``` const {mkdir, writeFile} = require('fs').promises; const os = require('os'); const path = require('path'); const puppeteer = require('puppeteer'); const DIR = path.join(os.tmpdir(), 'jest_puppeteer_global_setup'); module.exports = async function () { const browser = await puppeteer.launch(); // store the browser instance so we can teardown it later // this global is only available in the teardown but not in TestEnvironments globalThis.__BROWSER_GLOBAL__ = browser; // use the file system to expose the wsEndpoint for TestEnvironments await mkdir(DIR, {recursive: true}); await writeFile(path.join(DIR, 'wsEndpoint'), browser.wsEndpoint()); }; ``` setup.js Then we need a custom Test Environment for puppeteer ``` const {readFile} = require('fs').promises; const os = require('os'); const path = require('path'); const puppeteer = require('puppeteer'); const NodeEnvironment = require('jest-environment-node').default; const DIR = path.join(os.tmpdir(), 'jest_puppeteer_global_setup'); class PuppeteerEnvironment extends NodeEnvironment { constructor(config) { super(config); } async setup() { await super.setup(); // get the wsEndpoint const wsEndpoint = await readFile(path.join(DIR, 'wsEndpoint'), 'utf8'); if (!wsEndpoint) { throw new Error('wsEndpoint not found'); } // connect to puppeteer this.global.__BROWSER_GLOBAL__ = await puppeteer.connect({ browserWSEndpoint: wsEndpoint, }); } async teardown() { await super.teardown(); } getVmContext() { return super.getVmContext(); } } module.exports = PuppeteerEnvironment; ``` puppeteer\_environment.js Finally, we can close the puppeteer instance and clean-up the file ``` const fs = require('fs').promises; const os = require('os'); const path = require('path'); const DIR = path.join(os.tmpdir(), 'jest_puppeteer_global_setup'); module.exports = async function () { // close the browser instance await globalThis.__BROWSER_GLOBAL__.close(); // clean-up 
the wsEndpoint file await fs.rm(DIR, {recursive: true, force: true}); }; ``` teardown.js With all the things set up, we can now write our tests like this: ``` const timeout = 5000; describe( '/ (Home Page)', () => { let page; beforeAll(async () => { page = await globalThis.__BROWSER_GLOBAL__.newPage(); await page.goto('https://google.com'); }, timeout); it('should load without error', async () => { const text = await page.evaluate(() => document.body.textContent); expect(text).toContain('google'); }); }, timeout, ); ``` test.js Finally, set `jest.config.js` to read from these files. (The `jest-puppeteer` preset does something like this under the hood.) ``` module.exports = { globalSetup: './setup.js', globalTeardown: './teardown.js', testEnvironment: './puppeteer_environment.js', }; ``` Here's the code of [full working example](https://github.com/xfumihiro/jest-puppeteer-example). jest More Resources More Resources ============== By now you should have a good idea of how Jest can help you test your applications. If you're interested in learning more, here's some related stuff you might want to check out. Browse the docs --------------- * Learn about [Snapshot Testing](snapshot-testing), [Mock Functions](mock-functions), and more in our in-depth guides. * Migrate your existing tests to Jest by following our [migration guide](migration-guide). * Learn how to [configure Jest](configuration). * Look at the full [API Reference](api). * [Troubleshoot](troubleshooting) problems with Jest. Learn by example ---------------- You will find a number of example test cases in the [`examples`](https://github.com/facebook/jest/tree/main/examples) folder on GitHub. 
You can also learn from the excellent tests used by the [React](https://github.com/facebook/react/tree/main/packages/react/src/__tests__), [Relay](https://github.com/facebook/relay/tree/main/packages/react-relay/__tests__), and [React Native](https://github.com/facebook/react-native/tree/main/Libraries/Animated/__tests__) projects. Join the community ------------------ Ask questions and find answers from other Jest users like you. [Reactiflux](https://discord.gg/j6FKKQQrW9) is a Discord chat where a lot of Jest discussion happens. Check out the `#testing` channel. Follow the [Jest Twitter account](https://twitter.com/fbjest) and [blog](https://jestjs.io/blog/) to find out what's happening in the world of Jest. jest ECMAScript Modules ECMAScript Modules ================== Jest ships with *experimental* support for ECMAScript Modules (ESM). > Note that due to its experimental nature there are many bugs and missing features in Jest's implementation, both known and unknown. You should check out the [tracking issue](https://github.com/facebook/jest/issues/9430) and the [label](https://github.com/facebook/jest/labels/ES%20Modules) on the issue tracker for the latest status. > > > Also note that the APIs Jest uses to implement ESM support are still [considered experimental by Node](https://nodejs.org/api/vm.html#vm_class_vm_module) (as of version `14.13.1`). > > With the warnings out of the way, this is how you activate ESM support in your tests. 1. Ensure you either disable [code transforms](configuration#transform-objectstring-pathtotransformer--pathtotransformer-object) by passing `transform: {}` or otherwise configure your transformer to emit ESM rather than the default CommonJS (CJS). 2. Execute `node` with `--experimental-vm-modules`, e.g. `node --experimental-vm-modules node_modules/jest/bin/jest.js` or `NODE_OPTIONS=--experimental-vm-modules npx jest` etc. 
On Windows, you can use [`cross-env`](https://github.com/kentcdodds/cross-env) to be able to set environment variables. If you use Yarn, you can use `yarn node --experimental-vm-modules $(yarn bin jest)`. This command will also work if you use [Yarn Plug'n'Play](https://yarnpkg.com/features/pnp). 3. Beyond that, we attempt to follow `node`'s logic for activating "ESM mode" (such as looking at `type` in `package.json` or `.mjs` files), see [their docs](https://nodejs.org/api/esm.html#esm_enabling) for details. 4. If you want to treat other file extensions (such as `.jsx` or `.ts`) as ESM, please use the [`extensionsToTreatAsEsm` option](configuration#extensionstotreatasesm-arraystring). Differences between ESM and CommonJS ------------------------------------ Most of the differences are explained in [Node's documentation](https://nodejs.org/api/esm.html#esm_differences_between_es_modules_and_commonjs), but in addition to the things mentioned there, Jest injects a special variable into all executed files - the [`jest` object](jest-object). To access this object in ESM, you need to import it from the `@jest/globals` module or use `import.meta`. ``` import {jest} from '@jest/globals'; jest.useFakeTimers(); // etc. // alternatively import.meta.jest.useFakeTimers(); // jest === import.meta.jest => true ``` Please note that we currently don't support `jest.mock` in a clean way in ESM, but that is something we intend to add proper support for in the future. Follow [this issue](https://github.com/facebook/jest/issues/10025) for updates. jest Troubleshooting Troubleshooting =============== Uh oh, something went wrong? Use this guide to resolve issues with Jest. Tests are Failing and You Don't Know Why ---------------------------------------- Try using the debugging support built into Node. Note: This will only work in Node.js 8+. 
Place a `debugger;` statement in any of your tests, and then, in your project's directory, run: ``` node --inspect-brk node_modules/.bin/jest --runInBand [any other arguments here] or on Windows node --inspect-brk ./node_modules/jest/bin/jest.js --runInBand [any other arguments here] ``` This will run Jest in a Node process that an external debugger can connect to. Note that the process will pause until the debugger has connected to it. To debug in Google Chrome (or any Chromium-based browser), open your browser and go to `chrome://inspect` and click on "Open Dedicated DevTools for Node", which will give you a list of available node instances you can connect to. Click on the address displayed in the terminal (usually something like `localhost:9229`) after running the above command, and you will be able to debug Jest using Chrome's DevTools. The Chrome Developer Tools will be displayed, and a breakpoint will be set at the first line of the Jest CLI script (this is done to give you time to open the developer tools and to prevent Jest from executing before you have time to do so). Click the button that looks like a "play" button in the upper right hand side of the screen to continue execution. When Jest executes the test that contains the `debugger` statement, execution will pause and you can examine the current scope and call stack. > Note: the `--runInBand` cli option makes sure Jest runs the test in the same process rather than spawning processes for individual tests. Normally Jest parallelizes test runs across processes but it is hard to debug many processes at the same time. > > Debugging in VS Code -------------------- There are multiple ways to debug Jest tests with [Visual Studio Code's](https://code.visualstudio.com) built-in [debugger](https://code.visualstudio.com/docs/nodejs/nodejs-debugging). 
To attach the built-in debugger, run your tests as aforementioned: ``` node --inspect-brk node_modules/.bin/jest --runInBand [any other arguments here] or on Windows node --inspect-brk ./node_modules/jest/bin/jest.js --runInBand [any other arguments here] ``` Then attach VS Code's debugger using the following `launch.json` config: ``` { "version": "0.2.0", "configurations": [ { "type": "node", "request": "attach", "name": "Attach", "port": 9229 } ] } ``` To automatically launch and attach to a process running your tests, use the following configuration: ``` { "version": "0.2.0", "configurations": [ { "name": "Debug Jest Tests", "type": "node", "request": "launch", "runtimeArgs": [ "--inspect-brk", "${workspaceRoot}/node_modules/.bin/jest", "--runInBand" ], "console": "integratedTerminal", "internalConsoleOptions": "neverOpen" } ] } ``` or the following for Windows: ``` { "version": "0.2.0", "configurations": [ { "name": "Debug Jest Tests", "type": "node", "request": "launch", "runtimeArgs": [ "--inspect-brk", "${workspaceRoot}/node_modules/jest/bin/jest.js", "--runInBand" ], "console": "integratedTerminal", "internalConsoleOptions": "neverOpen" } ] } ``` If you are using Facebook's [`create-react-app`](https://github.com/facebookincubator/create-react-app), you can debug your Jest tests with the following configuration: ``` { "version": "0.2.0", "configurations": [ { "name": "Debug CRA Tests", "type": "node", "request": "launch", "runtimeExecutable": "${workspaceRoot}/node_modules/.bin/react-scripts", "args": [ "test", "--runInBand", "--no-cache", "--env=jsdom", "--watchAll=false" ], "cwd": "${workspaceRoot}", "console": "integratedTerminal", "internalConsoleOptions": "neverOpen" } ] } ``` More information on Node debugging can be found [here](https://nodejs.org/api/debugger.html). Debugging in WebStorm --------------------- [WebStorm](https://www.jetbrains.com/webstorm/) has built-in support for Jest. 
Read [Testing With Jest in WebStorm](https://blog.jetbrains.com/webstorm/2018/10/testing-with-jest-in-webstorm/) to learn more. Caching Issues -------------- The transform script was changed or Babel was updated and the changes aren't being recognized by Jest? Retry with [`--no-cache`](cli#--cache). Jest caches transformed module files to speed up test execution. If you are using your own custom transformer, consider adding a `getCacheKey` function to it: [getCacheKey in Relay](https://github.com/facebook/relay/blob/58cf36c73769690f0bbf90562707eadb062b029d/scripts/jest/preprocessor.js#L56-L61). Unresolved Promises ------------------- If a promise doesn't resolve at all, this error might be thrown: ``` Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL. ``` Most commonly this is caused by conflicting Promise implementations. Consider replacing the global promise implementation with your own, for example `globalThis.Promise = jest.requireActual('promise');` and/or consolidate the used Promise libraries to a single one. If your test is long-running, you may want to consider increasing the timeout by calling `jest.setTimeout` ``` jest.setTimeout(10000); // 10 second timeout ``` Watchman Issues --------------- Try running Jest with [`--no-watchman`](cli#--watchman) or set the `watchman` configuration option to `false`. Also see [watchman troubleshooting](https://facebook.github.io/watchman/docs/troubleshooting). Tests are Extremely Slow on Docker and/or Continuous Integration (CI) server. ----------------------------------------------------------------------------- While Jest is most of the time extremely fast on modern multi-core computers with fast SSDs, it may be slow on certain setups as our users [have](https://github.com/facebook/jest/issues/1395) [discovered](https://github.com/facebook/jest/issues/1524#issuecomment-260246008). 
Based on the [findings](https://github.com/facebook/jest/issues/1524#issuecomment-262366820), one way to mitigate this issue and improve the speed by up to 50% is to run tests sequentially. In order to do this you can run tests in the same thread using [`--runInBand`](cli#--runinband): ``` # Using Jest CLI jest --runInBand # Using yarn test (e.g. with create-react-app) yarn test --runInBand ``` Another alternative to expediting test execution time on Continuous Integration Servers such as Travis-CI is to set the max worker pool to ~*4*. Specifically on Travis-CI, this can reduce test execution time in half. Note: The Travis CI *free* plan available for open source projects only includes 2 CPU cores. ``` # Using Jest CLI jest --maxWorkers=4 # Using yarn test (e.g. with create-react-app) yarn test --maxWorkers=4 ``` If you use GitHub Actions, you can use [`github-actions-cpu-cores`](https://github.com/SimenB/github-actions-cpu-cores) to detect number of CPUs, and pass that to Jest. ``` - name: Get number of CPU cores id: cpu-cores uses: SimenB/github-actions-cpu-cores@v1 - name: run tests run: yarn jest --max-workers ${{ steps.cpu-cores.outputs.count }} ``` Another thing you can do is use the [`shard`](cli#--shard) flag to parallelize the test run across multiple machines. `coveragePathIgnorePatterns` seems to not have any effect. ----------------------------------------------------------- Make sure you are not using the `babel-plugin-istanbul` plugin. Jest wraps Istanbul, and therefore also tells Istanbul what files to instrument with coverage collection. When using `babel-plugin-istanbul`, every file that is processed by Babel will have coverage collection code, hence it is not being ignored by `coveragePathIgnorePatterns`. Defining Tests -------------- Tests must be defined synchronously for Jest to be able to collect your tests. 
As an example to show why this is the case, imagine we wrote a test like so: ``` // Don't do this it will not work setTimeout(() => { it('passes', () => expect(1).toBe(1)); }, 0); ``` When Jest runs your test to collect the `test`s it will not find any because we have set the definition to happen asynchronously on the next tick of the event loop. *Note:* This means when you are using `test.each` you cannot set the table asynchronously within a `beforeEach` / `beforeAll`. Still unresolved? ----------------- See [Help](https://jestjs.io/help). jest Mock Functions Mock Functions ============== Mock functions allow you to test the links between code by erasing the actual implementation of a function, capturing calls to the function (and the parameters passed in those calls), capturing instances of constructor functions when instantiated with `new`, and allowing test-time configuration of return values. There are two ways to mock functions: Either by creating a mock function to use in test code, or writing a [`manual mock`](manual-mocks) to override a module dependency. Using a mock function --------------------- Let's imagine we're testing an implementation of a function `forEach`, which invokes a callback for each item in a supplied array. ``` function forEach(items, callback) { for (let index = 0; index < items.length; index++) { callback(items[index]); } } ``` To test this function, we can use a mock function, and inspect the mock's state to ensure the callback is invoked as expected. 
``` const mockCallback = jest.fn(x => 42 + x); forEach([0, 1], mockCallback); // The mock function is called twice expect(mockCallback.mock.calls.length).toBe(2); // The first argument of the first call to the function was 0 expect(mockCallback.mock.calls[0][0]).toBe(0); // The first argument of the second call to the function was 1 expect(mockCallback.mock.calls[1][0]).toBe(1); // The return value of the first call to the function was 42 expect(mockCallback.mock.results[0].value).toBe(42); ``` `.mock` property ----------------- All mock functions have this special `.mock` property, which is where data about how the function has been called and what the function returned is kept. The `.mock` property also tracks the value of `this` for each call, so it is possible to inspect this as well: ``` const myMock1 = jest.fn(); const a = new myMock1(); console.log(myMock1.mock.instances); // > [ <a> ] const myMock2 = jest.fn(); const b = {}; const bound = myMock2.bind(b); bound(); console.log(myMock2.mock.contexts); // > [ <b> ] ``` These mock members are very useful in tests to assert how these functions get called, instantiated, or what they returned: ``` // The function was called exactly once expect(someMockFunction.mock.calls.length).toBe(1); // The first arg of the first call to the function was 'first arg' expect(someMockFunction.mock.calls[0][0]).toBe('first arg'); // The second arg of the first call to the function was 'second arg' expect(someMockFunction.mock.calls[0][1]).toBe('second arg'); // The return value of the first call to the function was 'return value' expect(someMockFunction.mock.results[0].value).toBe('return value'); // The function was called with a certain `this` context: the `element` object. 
expect(someMockFunction.mock.contexts[0]).toBe(element); // This function was instantiated exactly twice expect(someMockFunction.mock.instances.length).toBe(2); // The object returned by the first instantiation of this function // had a `name` property whose value was set to 'test' expect(someMockFunction.mock.instances[0].name).toEqual('test'); // The first argument of the last call to the function was 'test' expect(someMockFunction.mock.lastCall[0]).toBe('test'); ``` Mock Return Values ------------------ Mock functions can also be used to inject test values into your code during a test: ``` const myMock = jest.fn(); console.log(myMock()); // > undefined myMock.mockReturnValueOnce(10).mockReturnValueOnce('x').mockReturnValue(true); console.log(myMock(), myMock(), myMock(), myMock()); // > 10, 'x', true, true ``` Mock functions are also very effective in code that uses a functional continuation-passing style. Code written in this style helps avoid the need for complicated stubs that recreate the behavior of the real component they're standing in for, in favor of injecting values directly into the test right before they're used. ``` const filterTestFn = jest.fn(); // Make the mock return `true` for the first call, // and `false` for the second call filterTestFn.mockReturnValueOnce(true).mockReturnValueOnce(false); const result = [11, 12].filter(num => filterTestFn(num)); console.log(result); // > [11] console.log(filterTestFn.mock.calls[0][0]); // 11 console.log(filterTestFn.mock.calls[1][0]); // 12 ``` Most real-world examples actually involve getting ahold of a mock function on a dependent component and configuring that, but the technique is the same. In these cases, try to avoid the temptation to implement logic inside of any function that's not directly being tested. Mocking Modules --------------- Suppose we have a class that fetches users from our API. 
The class uses [axios](https://github.com/axios/axios) to call the API then returns the `data` attribute which contains all the users: ``` import axios from 'axios'; class Users { static all() { return axios.get('/users.json').then(resp => resp.data); } } export default Users; ``` users.js Now, in order to test this method without actually hitting the API (and thus creating slow and fragile tests), we can use the `jest.mock(...)` function to automatically mock the axios module. Once we mock the module we can provide a `mockResolvedValue` for `.get` that returns the data we want our test to assert against. In effect, we are saying that we want `axios.get('/users.json')` to return a fake response. ``` import axios from 'axios'; import Users from './users'; jest.mock('axios'); test('should fetch users', () => { const users = [{name: 'Bob'}]; const resp = {data: users}; axios.get.mockResolvedValue(resp); // or you could use the following depending on your use case: // axios.get.mockImplementation(() => Promise.resolve(resp)) return Users.all().then(data => expect(data).toEqual(users)); }); ``` users.test.js Mocking Partials ---------------- Subsets of a module can be mocked and the rest of the module can keep their actual implementation: ``` export const foo = 'foo'; export const bar = () => 'bar'; export default () => 'baz'; ``` foo-bar-baz.js ``` //test.js import defaultExport, {bar, foo} from '../foo-bar-baz'; jest.mock('../foo-bar-baz', () => { const originalModule = jest.requireActual('../foo-bar-baz'); //Mock the default export and named export 'foo' return { __esModule: true, ...originalModule, default: jest.fn(() => 'mocked baz'), foo: 'mocked foo', }; }); test('should do a partial mock', () => { const defaultExportResult = defaultExport(); expect(defaultExportResult).toBe('mocked baz'); expect(defaultExport).toHaveBeenCalled(); expect(foo).toBe('mocked foo'); expect(bar()).toBe('bar'); }); ``` Mock Implementations -------------------- Still, there are cases 
where it's useful to go beyond the ability to specify return values and full-on replace the implementation of a mock function. This can be done with `jest.fn` or the `mockImplementationOnce` method on mock functions. ``` const myMockFn = jest.fn(cb => cb(null, true)); myMockFn((err, val) => console.log(val)); // > true ``` The `mockImplementation` method is useful when you need to define the default implementation of a mock function that is created from another module: ``` module.exports = function () { // some implementation; }; ``` foo.js ``` jest.mock('../foo'); // this happens automatically with automocking const foo = require('../foo'); // foo is a mock function foo.mockImplementation(() => 42); foo(); // > 42 ``` test.js When you need to recreate a complex behavior of a mock function such that multiple function calls produce different results, use the `mockImplementationOnce` method: ``` const myMockFn = jest .fn() .mockImplementationOnce(cb => cb(null, true)) .mockImplementationOnce(cb => cb(null, false)); myMockFn((err, val) => console.log(val)); // > true myMockFn((err, val) => console.log(val)); // > false ``` When the mocked function runs out of implementations defined with `mockImplementationOnce`, it will execute the default implementation set with `jest.fn` (if it is defined): ``` const myMockFn = jest .fn(() => 'default') .mockImplementationOnce(() => 'first call') .mockImplementationOnce(() => 'second call'); console.log(myMockFn(), myMockFn(), myMockFn(), myMockFn()); // > 'first call', 'second call', 'default', 'default' ``` For cases where we have methods that are typically chained (and thus always need to return `this`), we have a sugary API to simplify this in the form of a `.mockReturnThis()` function that also sits on all mocks: ``` const myObj = { myMethod: jest.fn().mockReturnThis(), }; // is the same as const otherObj = { myMethod: jest.fn(function () { return this; }), }; ``` Mock Names ---------- You can optionally provide a name for 
your mock functions, which will be displayed instead of "jest.fn()" in the test error output. Use this if you want to be able to quickly identify the mock function reporting an error in your test output. ``` const myMockFn = jest .fn() .mockReturnValue('default') .mockImplementation(scalar => 42 + scalar) .mockName('add42'); ``` Custom Matchers --------------- Finally, in order to make it less demanding to assert how mock functions have been called, we've added some custom matcher functions for you: ``` // The mock function was called at least once expect(mockFunc).toHaveBeenCalled(); // The mock function was called at least once with the specified args expect(mockFunc).toHaveBeenCalledWith(arg1, arg2); // The last call to the mock function was called with the specified args expect(mockFunc).toHaveBeenLastCalledWith(arg1, arg2); // All calls and the name of the mock is written as a snapshot expect(mockFunc).toMatchSnapshot(); ``` These matchers are sugar for common forms of inspecting the `.mock` property. You can always do this manually yourself if that's more to your taste or if you need to do something more specific: ``` // The mock function was called at least once expect(mockFunc.mock.calls.length).toBeGreaterThan(0); // The mock function was called at least once with the specified args expect(mockFunc.mock.calls).toContainEqual([arg1, arg2]); // The last call to the mock function was called with the specified args expect(mockFunc.mock.calls[mockFunc.mock.calls.length - 1]).toEqual([ arg1, arg2, ]); // The first arg of the last call to the mock function was `42` // (note that there is no sugar helper for this specific of an assertion) expect(mockFunc.mock.calls[mockFunc.mock.calls.length - 1][0]).toBe(42); // A snapshot will check that a mock was invoked the same number of times, // in the same order, with the same arguments. It will also assert on the name. 
expect(mockFunc.mock.calls).toEqual([[arg1, arg2]]); expect(mockFunc.getMockName()).toBe('a mock name'); ``` For a complete list of matchers, check out the [reference docs](expect).
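To build intuition for what the `.mock` property inspected above actually tracks, here is a minimal plain-JavaScript sketch of the bookkeeping a `jest.fn()`-style mock performs. This is a simplification for illustration only: `makeMock` is a hypothetical helper, not a Jest API, and Jest's real mocks also record `instances`, `contexts`, and invocation order.

```javascript
// Hypothetical, simplified stand-in for jest.fn() -- illustration only.
function makeMock(impl = () => undefined) {
  const mockFn = function (...args) {
    mockFn.mock.calls.push(args); // record the arguments of every call
    mockFn.mock.lastCall = args;
    const value = impl.apply(this, args);
    mockFn.mock.results.push({type: 'return', value}); // record what it returned
    return value;
  };
  mockFn.mock = {calls: [], results: [], lastCall: undefined};
  return mockFn;
}

const mock = makeMock(x => x + 1);
mock(1);
mock(2, 3);

console.log(mock.mock.calls); // [ [ 1 ], [ 2, 3 ] ]
console.log(mock.mock.results[1].value); // 3
console.log(mock.mock.lastCall); // [ 2, 3 ]
```

Matchers such as `toHaveBeenCalledWith` and `toHaveBeenLastCalledWith` are essentially sugar over arrays like these.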
jest Using Matchers Using Matchers ============== Jest uses "matchers" to let you test values in different ways. This document will introduce some commonly used matchers. For the full list, see the [`expect` API doc](expect). Common Matchers --------------- The simplest way to test a value is with exact equality. ``` test('two plus two is four', () => { expect(2 + 2).toBe(4); }); ``` In this code, `expect(2 + 2)` returns an "expectation" object. You typically won't do much with these expectation objects except call matchers on them. In this code, `.toBe(4)` is the matcher. When Jest runs, it tracks all the failing matchers so that it can print out nice error messages for you. `toBe` uses `Object.is` to test exact equality. If you want to check the value of an object, use `toEqual` instead: ``` test('object assignment', () => { const data = {one: 1}; data['two'] = 2; expect(data).toEqual({one: 1, two: 2}); }); ``` `toEqual` recursively checks every field of an object or array. You can also test for the opposite of a matcher: ``` test('adding positive numbers is not zero', () => { for (let a = 1; a < 10; a++) { for (let b = 1; b < 10; b++) { expect(a + b).not.toBe(0); } } }); ``` Truthiness ---------- In tests, you sometimes need to distinguish between `undefined`, `null`, and `false`, but you sometimes do not want to treat these differently. Jest contains helpers that let you be explicit about what you want. 
* `toBeNull` matches only `null` * `toBeUndefined` matches only `undefined` * `toBeDefined` is the opposite of `toBeUndefined` * `toBeTruthy` matches anything that an `if` statement treats as true * `toBeFalsy` matches anything that an `if` statement treats as false For example: ``` test('null', () => { const n = null; expect(n).toBeNull(); expect(n).toBeDefined(); expect(n).not.toBeUndefined(); expect(n).not.toBeTruthy(); expect(n).toBeFalsy(); }); test('zero', () => { const z = 0; expect(z).not.toBeNull(); expect(z).toBeDefined(); expect(z).not.toBeUndefined(); expect(z).not.toBeTruthy(); expect(z).toBeFalsy(); }); ``` You should use the matcher that most precisely corresponds to what you want your code to be doing. Numbers ------- Most ways of comparing numbers have matcher equivalents. ``` test('two plus two', () => { const value = 2 + 2; expect(value).toBeGreaterThan(3); expect(value).toBeGreaterThanOrEqual(3.5); expect(value).toBeLessThan(5); expect(value).toBeLessThanOrEqual(4.5); // toBe and toEqual are equivalent for numbers expect(value).toBe(4); expect(value).toEqual(4); }); ``` For floating point equality, use `toBeCloseTo` instead of `toEqual`, because you don't want a test to depend on a tiny rounding error. ``` test('adding floating point numbers', () => { const value = 0.1 + 0.2; //expect(value).toBe(0.3); This won't work because of rounding error expect(value).toBeCloseTo(0.3); // This works. 
}); ``` Strings ------- You can check strings against regular expressions with `toMatch`: ``` test('there is no I in team', () => { expect('team').not.toMatch(/I/); }); test('but there is a "stop" in Christoph', () => { expect('Christoph').toMatch(/stop/); }); ``` Arrays and iterables -------------------- You can check if an array or iterable contains a particular item using `toContain`: ``` const shoppingList = [ 'diapers', 'kleenex', 'trash bags', 'paper towels', 'milk', ]; test('the shopping list has milk on it', () => { expect(shoppingList).toContain('milk'); expect(new Set(shoppingList)).toContain('milk'); }); ``` Exceptions ---------- If you want to test whether a particular function throws an error when it's called, use `toThrow`. ``` function compileAndroidCode() { throw new Error('you are using the wrong JDK'); } test('compiling android goes as expected', () => { expect(() => compileAndroidCode()).toThrow(); expect(() => compileAndroidCode()).toThrow(Error); // You can also use the exact error message or a regexp expect(() => compileAndroidCode()).toThrow('you are using the wrong JDK'); expect(() => compileAndroidCode()).toThrow(/JDK/); }); ``` > Note: the function that throws an exception needs to be invoked within a wrapping function otherwise the `toThrow` assertion will fail. > > And More -------- This is just a taste. For a complete list of matchers, check out the [reference docs](expect). Once you've learned about the matchers that are available, a good next step is to check out how Jest lets you [test asynchronous code](asynchronous). jest Jest Community Jest Community ============== The community around Jest is working hard to make the testing experience even greater. [jest-community](https://github.com/jest-community) is a new GitHub organization for high quality Jest additions curated by Jest maintainers and collaborators. 
It already features some of our favorite projects, to name a few: * [vscode-jest](https://github.com/jest-community/vscode-jest) * [jest-extended](https://github.com/jest-community/jest-extended) * [eslint-plugin-jest](https://github.com/jest-community/eslint-plugin-jest) * [awesome-jest](https://github.com/jest-community/awesome-jest) Community projects under one organization are a great way for Jest to experiment with new ideas, techniques, and approaches; to encourage contributions from the community; and to publish contributions independently at a faster pace. Awesome Jest ------------ The jest-community org maintains an [awesome-jest](https://github.com/jest-community/awesome-jest) list of great projects and resources related to Jest. If you have something awesome to share, feel free to reach out to us! We'd love to share your project on the awesome-jest list ([send a PR here](https://github.com/jest-community/awesome-jest/pulls)) or, if you would like to transfer your project to the jest-community org, reach out to one of the owners of the org. jest Snapshot Testing Snapshot Testing ================ Snapshot tests are a very useful tool whenever you want to make sure your UI does not change unexpectedly. A typical snapshot test case renders a UI component, takes a snapshot, then compares it to a reference snapshot file stored alongside the test. The test will fail if the two snapshots do not match: either the change is unexpected, or the reference snapshot needs to be updated to the new version of the UI component. Snapshot Testing with Jest -------------------------- A similar approach can be taken when it comes to testing your React components. Instead of rendering the graphical UI, which would require building the entire app, you can use a test renderer to quickly generate a serializable value for your React tree. 
Consider this [example test](https://github.com/facebook/jest/blob/main/examples/snapshot/__tests__/link.test.js) for a [Link component](https://github.com/facebook/jest/blob/main/examples/snapshot/Link.js): ``` import renderer from 'react-test-renderer'; import Link from '../Link'; it('renders correctly', () => { const tree = renderer .create(<Link page="http://www.facebook.com">Facebook</Link>) .toJSON(); expect(tree).toMatchSnapshot(); }); ``` The first time this test is run, Jest creates a [snapshot file](https://github.com/facebook/jest/blob/main/examples/snapshot/__tests__/__snapshots__/link.test.js.snap) that looks like this: ``` exports[`renders correctly 1`] = ` <a className="normal" href="http://www.facebook.com" onMouseEnter={[Function]} onMouseLeave={[Function]} > Facebook </a> `; ``` The snapshot artifact should be committed alongside code changes, and reviewed as part of your code review process. Jest uses [pretty-format](https://github.com/facebook/jest/tree/main/packages/pretty-format) to make snapshots human-readable during code review. On subsequent test runs, Jest will compare the rendered output with the previous snapshot. If they match, the test will pass. If they don't match, either the test runner found a bug in your code (in the `<Link>` component in this case) that should be fixed, or the implementation has changed and the snapshot needs to be updated. > Note: The snapshot is directly scoped to the data you render – in our example, the `<Link />` component with the `page` prop passed to it. This implies that even if another file (say, `App.js`) renders the `<Link />` component with missing props, this test will still pass: it doesn't know how `<Link />` is used elsewhere and is scoped only to `Link.js`. Also, rendering the same component with different props in other snapshot tests will not affect the first one, as the tests don't know about each other. 
> > More information on how snapshot testing works and why we built it can be found on the [release blog post](https://jestjs.io/blog/2016/07/27/jest-14). We recommend reading [this blog post](http://benmccormick.org/2016/09/19/testing-with-jest-snapshots-first-impressions/) to get a good sense of when you should use snapshot testing. We also recommend watching this [egghead video](https://egghead.io/lessons/javascript-use-jest-s-snapshot-testing-feature?pl=testing-javascript-with-jest-a36c4074) on Snapshot Testing with Jest. ### Updating Snapshots It's straightforward to spot when a snapshot test fails after a bug has been introduced. When that happens, go ahead and fix the issue and make sure your snapshot tests are passing again. Now, let's talk about the case when a snapshot test is failing due to an intentional implementation change. One such situation can arise if we intentionally change the address the Link component in our example is pointing to. ``` // Updated test case with a Link to a different address it('renders correctly', () => { const tree = renderer .create(<Link page="http://www.instagram.com">Instagram</Link>) .toJSON(); expect(tree).toMatchSnapshot(); }); ``` In that case, Jest will print this output: ![](https://d33wubrfki0l68.cloudfront.net/73c6c1232b1cb5546ed8e8ea947f945b49b5d7bc/9fdf5/assets/images/failedsnapshottest-28fddb1a7aa06fe502f8a74c0b0049ef.png) Since we just updated our component to point to a different address, it's reasonable to expect changes in the snapshot for this component. Our snapshot test case is failing because the snapshot for our updated component no longer matches the snapshot artifact for this test case. To resolve this, we will need to update our snapshot artifacts. You can run Jest with a flag that will tell it to re-generate snapshots: ``` jest --updateSnapshot ``` Go ahead and accept the changes by running the above command. 
You may also use the equivalent single-character `-u` flag to re-generate snapshots if you prefer. This will re-generate snapshot artifacts for all failing snapshot tests. If we had any additional failing snapshot tests due to an unintentional bug, we would need to fix the bug before re-generating snapshots to avoid recording snapshots of the buggy behavior. If you'd like to limit which snapshot test cases get re-generated, you can pass an additional `--testNamePattern` flag to re-record snapshots only for those tests that match the pattern. You can try out this functionality by cloning the [snapshot example](https://github.com/facebook/jest/tree/main/examples/snapshot), modifying the `Link` component, and running Jest. ### Interactive Snapshot Mode Failed snapshots can also be updated interactively in watch mode. Once you enter Interactive Snapshot Mode, Jest will step you through the failed snapshots one test at a time and give you the opportunity to review the failed output. From here you can choose to update that snapshot or skip to the next: ![](https://d33wubrfki0l68.cloudfront.net/62d638794f9612ad8dca3234ff4e8b344c4229e9/93ec5/assets/images/interactivesnapshotupdate-a17d8d77f94702048b4d0e0e4c580719.gif) Once you're finished, Jest will give you a summary before returning to watch mode. ### Inline Snapshots Inline snapshots behave identically to external snapshots (`.snap` files), except the snapshot values are written automatically back into the source code. This means you can get the benefits of automatically generated snapshots without having to switch to an external file to make sure the correct value was written. 
**Example:** First, you write a test, calling `.toMatchInlineSnapshot()` with no arguments: ``` it('renders correctly', () => { const tree = renderer .create(<Link page="https://example.com">Example Site</Link>) .toJSON(); expect(tree).toMatchInlineSnapshot(); }); ``` The next time you run Jest, `tree` will be evaluated, and a snapshot will be written as an argument to `toMatchInlineSnapshot`: ``` it('renders correctly', () => { const tree = renderer .create(<Link page="https://example.com">Example Site</Link>) .toJSON(); expect(tree).toMatchInlineSnapshot(` <a className="normal" href="https://example.com" onMouseEnter={[Function]} onMouseLeave={[Function]} > Example Site </a> `); }); ``` That's all there is to it! You can even update the snapshots with `--updateSnapshot` or using the `u` key in `--watch` mode. By default, Jest handles the writing of snapshots into your source code. However, if you're using [prettier](https://www.npmjs.com/package/prettier) in your project, Jest will detect this and delegate the work to prettier instead (including honoring your configuration). ### Property Matchers Often there are fields in the object you want to snapshot which are generated (like IDs and Dates). If you try to snapshot these objects, they will force the snapshot to fail on every run: ``` it('will fail every time', () => { const user = { createdAt: new Date(), id: Math.floor(Math.random() * 20), name: 'LeBron James', }; expect(user).toMatchSnapshot(); }); // Snapshot exports[`will fail every time 1`] = ` Object { "createdAt": 2018-05-19T23:36:09.816Z, "id": 3, "name": "LeBron James", } `; ``` For these cases, Jest allows providing an asymmetric matcher for any property. 
These matchers are checked before the snapshot is written or tested, and then saved to the snapshot file instead of the received value: ``` it('will check the matchers and pass', () => { const user = { createdAt: new Date(), id: Math.floor(Math.random() * 20), name: 'LeBron James', }; expect(user).toMatchSnapshot({ createdAt: expect.any(Date), id: expect.any(Number), }); }); // Snapshot exports[`will check the matchers and pass 1`] = ` Object { "createdAt": Any<Date>, "id": Any<Number>, "name": "LeBron James", } `; ``` Any given value that is not a matcher will be checked exactly and saved to the snapshot: ``` it('will check the values and pass', () => { const user = { createdAt: new Date(), name: 'Bond... James Bond', }; expect(user).toMatchSnapshot({ createdAt: expect.any(Date), name: 'Bond... James Bond', }); }); // Snapshot exports[`will check the values and pass 1`] = ` Object { "createdAt": Any<Date>, "name": 'Bond... James Bond', } `; ``` tip If the case concerns a string rather than an object, you need to replace the random part of that string yourself before testing the snapshot. For that you can use e.g. [`replace()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/replace) and [regular expressions](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions). ``` const randomNumber = Math.round(Math.random() * 100); const stringWithRandomData = `<div id="${randomNumber}">Lorem ipsum</div>`; const stringWithConstantData = stringWithRandomData.replace(/id="\d+"/, 'id="123"'); expect(stringWithConstantData).toMatchSnapshot(); ``` Another way is to [mock](mock-functions) the library responsible for generating the random part of the code you're snapshotting. Best Practices -------------- Snapshots are a fantastic tool for identifying unexpected interface changes within your application – whether that interface is an API response, UI, logs, or error messages. 
As with any testing strategy, there are some best-practices you should be aware of, and guidelines you should follow, in order to use them effectively. ### 1. Treat snapshots as code Commit snapshots and review them as part of your regular code review process. This means treating snapshots as you would any other type of test or code in your project. Ensure that your snapshots are readable by keeping them focused, short, and by using tools that enforce these stylistic conventions. As mentioned previously, Jest uses [`pretty-format`](https://yarnpkg.com/en/package/pretty-format) to make snapshots human-readable, but you may find it useful to introduce additional tools, like [`eslint-plugin-jest`](https://yarnpkg.com/en/package/eslint-plugin-jest) with its [`no-large-snapshots`](https://github.com/jest-community/eslint-plugin-jest/blob/main/docs/rules/no-large-snapshots.md) option, or [`snapshot-diff`](https://yarnpkg.com/en/package/snapshot-diff) with its component snapshot comparison feature, to promote committing short, focused assertions. The goal is to make it easy to review snapshots in pull requests, and fight against the habit of regenerating snapshots when test suites fail instead of examining the root causes of their failure. ### 2. Tests should be deterministic Your tests should be deterministic. Running the same tests multiple times on a component that has not changed should produce the same results every time. You're responsible for making sure your generated snapshots do not include platform specific or other non-deterministic data. For example, if you have a [Clock](https://github.com/facebook/jest/blob/main/examples/snapshot/Clock.js) component that uses `Date.now()`, the snapshot generated from this component will be different every time the test case is run. 
In this case we can [mock the Date.now() method](mock-functions) to return a consistent value every time the test is run: ``` Date.now = jest.fn(() => 1482363367071); ``` Now, every time the snapshot test case runs, `Date.now()` will return `1482363367071` consistently. This will result in the same snapshot being generated for this component regardless of when the test is run. ### 3. Use descriptive snapshot names Always strive to use descriptive test and/or snapshot names for snapshots. The best names describe the expected snapshot content. This makes it easier for reviewers to verify the snapshots during review, and for anyone to know whether or not an outdated snapshot is the correct behavior before updating. For example, compare: ``` exports[`<UserName /> should handle some test case`] = `null`; exports[`<UserName /> should handle some other test case`] = ` <div> Alan Turing </div> `; ``` To: ``` exports[`<UserName /> should render null`] = `null`; exports[`<UserName /> should render Alan Turing`] = ` <div> Alan Turing </div> `; ``` Since the latter describes exactly what's expected in the output, it's more clear to see when it's wrong: ``` exports[`<UserName /> should render null`] = ` <div> Alan Turing </div> `; exports[`<UserName /> should render Alan Turing`] = `null`; ``` Frequently Asked Questions -------------------------- ### Are snapshots written automatically on Continuous Integration (CI) systems? No, as of Jest 20, snapshots in Jest are not automatically written when Jest is run in a CI system without explicitly passing `--updateSnapshot`. It is expected that all snapshots are part of the code that is run on CI and since new snapshots automatically pass, they should not pass a test run on a CI system. It is recommended to always commit all snapshots and to keep them in version control. ### Should snapshot files be committed? Yes, all snapshot files should be committed alongside the modules they are covering and their tests. 
They should be considered part of a test, similar to the value of any other assertion in Jest. In fact, snapshots represent the state of the source modules at any given point in time. In this way, when the source modules are modified, Jest can tell what changed from the previous version. It can also provide a lot of additional context during code review in which reviewers can study your changes better. ### Does snapshot testing only work with React components? [React](tutorial-react) and [React Native](tutorial-react-native) components are a good use case for snapshot testing. However, snapshots can capture any serializable value and should be used anytime the goal is testing whether the output is correct. The Jest repository contains many examples of testing the output of Jest itself, the output of Jest's assertion library as well as log messages from various parts of the Jest codebase. See an example of [snapshotting CLI output](https://github.com/facebook/jest/blob/main/e2e/__tests__/console.test.ts) in the Jest repo. ### What's the difference between snapshot testing and visual regression testing? Snapshot testing and visual regression testing are two distinct ways of testing UIs, and they serve different purposes. Visual regression testing tools take screenshots of web pages and compare the resulting images pixel by pixel. With Snapshot testing values are serialized, stored within text files, and compared using a diff algorithm. There are different trade-offs to consider and we listed the reasons why snapshot testing was built in the [Jest blog](https://jestjs.io/blog/2016/07/27/jest-14#why-snapshot-testing). ### Does snapshot testing replace unit testing? Snapshot testing is only one of more than 20 assertions that ship with Jest. The aim of snapshot testing is not to replace existing unit tests, but to provide additional value and make testing painless. 
In some scenarios, snapshot testing can potentially remove the need for unit testing for a particular set of functionalities (e.g. React components), but they can work together as well. ### What is the performance of snapshot testing regarding speed and size of the generated files? Jest has been rewritten with performance in mind, and snapshot testing is not an exception. Since snapshots are stored within text files, this way of testing is fast and reliable. Jest generates a new file for each test file that invokes the `toMatchSnapshot` matcher. The size of the snapshots is pretty small: For reference, the size of all snapshot files in the Jest codebase itself is less than 300 KB. ### How do I resolve conflicts within snapshot files? Snapshot files must always represent the current state of the modules they are covering. Therefore, if you are merging two branches and encounter a conflict in the snapshot files, you can either resolve the conflict manually or update the snapshot file by running Jest and inspecting the result. ### Is it possible to apply test-driven development principles with snapshot testing? Although it is possible to write snapshot files manually, that is usually not approachable. Snapshots help to figure out whether the output of the modules covered by tests is changed, rather than giving guidance to design the code in the first place. ### Does code coverage work with snapshot testing? Yes, as well as with any other test.
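The write-on-first-run, compare-thereafter cycle described throughout this page can be sketched in a few lines of plain JavaScript. This is a deliberately simplified model: Jest actually serializes values with pretty-format, persists snapshots in `.snap` files on disk, and re-records them via `--updateSnapshot`; here we serialize with JSON and keep snapshots in an in-memory Map, and `checkSnapshot` is a hypothetical helper, not a Jest API.

```javascript
// Toy snapshot matcher -- illustration only, not Jest's implementation.
const snapshots = new Map();

function checkSnapshot(name, value) {
  const serialized = JSON.stringify(value, null, 2);
  if (!snapshots.has(name)) {
    snapshots.set(name, serialized); // first run: record the snapshot (it passes)
    return {pass: true, written: true};
  }
  // later runs: compare against the stored snapshot
  return {pass: snapshots.get(name) === serialized, written: false};
}

const firstRun = checkSnapshot('link', {href: 'http://www.facebook.com'});
const unchanged = checkSnapshot('link', {href: 'http://www.facebook.com'});
const changed = checkSnapshot('link', {href: 'http://www.instagram.com'});

console.log(firstRun.written, unchanged.pass, changed.pass); // true true false
```

A failing comparison corresponds to the choice discussed above: fix the code, or re-record the snapshot.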
jest Globals Globals ======= In your test files, Jest puts each of these methods and objects into the global environment. You don't have to require or import anything to use them. However, if you prefer explicit imports, you can do `import {describe, expect, test} from '@jest/globals'`. Methods ------- * [Reference](#reference) + [`afterAll(fn, timeout)`](#afterallfn-timeout) + [`afterEach(fn, timeout)`](#aftereachfn-timeout) + [`beforeAll(fn, timeout)`](#beforeallfn-timeout) + [`beforeEach(fn, timeout)`](#beforeeachfn-timeout) + [`describe(name, fn)`](#describename-fn) + [`describe.each(table)(name, fn, timeout)`](#describeeachtablename-fn-timeout) + [`describe.only(name, fn)`](#describeonlyname-fn) + [`describe.only.each(table)(name, fn)`](#describeonlyeachtablename-fn) + [`describe.skip(name, fn)`](#describeskipname-fn) + [`describe.skip.each(table)(name, fn)`](#describeskipeachtablename-fn) + [`test(name, fn, timeout)`](#testname-fn-timeout) + [`test.concurrent(name, fn, timeout)`](#testconcurrentname-fn-timeout) + [`test.concurrent.each(table)(name, fn, timeout)`](#testconcurrenteachtablename-fn-timeout) + [`test.concurrent.only.each(table)(name, fn)`](#testconcurrentonlyeachtablename-fn) + [`test.concurrent.skip.each(table)(name, fn)`](#testconcurrentskipeachtablename-fn) + [`test.each(table)(name, fn, timeout)`](#testeachtablename-fn-timeout) + [`test.failing(name, fn, timeout)`](#testfailingname-fn-timeout) + [`test.failing.each(name, fn, timeout)`](#testfailingeachname-fn-timeout) + [`test.only.failing(name, fn, timeout)`](#testonlyfailingname-fn-timeout) + [`test.skip.failing(name, fn, timeout)`](#testskipfailingname-fn-timeout) + [`test.only(name, fn, timeout)`](#testonlyname-fn-timeout) + [`test.only.each(table)(name, fn)`](#testonlyeachtablename-fn-1) + [`test.skip(name, fn)`](#testskipname-fn) + [`test.skip.each(table)(name, fn)`](#testskipeachtablename-fn) + [`test.todo(name)`](#testtodoname) * [TypeScript Usage](#typescript-usage) + [`.each`](#each) 
Reference --------- ### `afterAll(fn, timeout)` Runs a function after all the tests in this file have completed. If the function returns a promise or is a generator, Jest waits for that promise to resolve before continuing. Optionally, you can provide a `timeout` (in milliseconds) for specifying how long to wait before aborting. *Note: The default timeout is 5 seconds.* This is often useful if you want to clean up some global setup state that is shared across tests. For example: ``` const globalDatabase = makeGlobalDatabase(); function cleanUpDatabase(db) { db.cleanUp(); } afterAll(() => { cleanUpDatabase(globalDatabase); }); test('can find things', () => { return globalDatabase.find('thing', {}, results => { expect(results.length).toBeGreaterThan(0); }); }); test('can insert a thing', () => { return globalDatabase.insert('thing', makeThing(), response => { expect(response.success).toBeTruthy(); }); }); ``` Here the `afterAll` ensures that `cleanUpDatabase` is called after all tests run. If `afterAll` is inside a `describe` block, it runs at the end of the describe block. If you want to run some cleanup after every test instead of after all tests, use `afterEach` instead. ### `afterEach(fn, timeout)` Runs a function after each one of the tests in this file completes. If the function returns a promise or is a generator, Jest waits for that promise to resolve before continuing. Optionally, you can provide a `timeout` (in milliseconds) for specifying how long to wait before aborting. *Note: The default timeout is 5 seconds.* This is often useful if you want to clean up some temporary state that is created by each test. 
For example: ``` const globalDatabase = makeGlobalDatabase(); function cleanUpDatabase(db) { db.cleanUp(); } afterEach(() => { cleanUpDatabase(globalDatabase); }); test('can find things', () => { return globalDatabase.find('thing', {}, results => { expect(results.length).toBeGreaterThan(0); }); }); test('can insert a thing', () => { return globalDatabase.insert('thing', makeThing(), response => { expect(response.success).toBeTruthy(); }); }); ``` Here the `afterEach` ensures that `cleanUpDatabase` is called after each test runs. If `afterEach` is inside a `describe` block, it only runs after the tests that are inside this describe block. If you want to run some cleanup just once, after all of the tests run, use `afterAll` instead. ### `beforeAll(fn, timeout)` Runs a function before any of the tests in this file run. If the function returns a promise or is a generator, Jest waits for that promise to resolve before running tests. Optionally, you can provide a `timeout` (in milliseconds) for specifying how long to wait before aborting. *Note: The default timeout is 5 seconds.* This is often useful if you want to set up some global state that will be used by many tests. For example: ``` const globalDatabase = makeGlobalDatabase(); beforeAll(() => { // Clears the database and adds some testing data. // Jest will wait for this promise to resolve before running tests. return globalDatabase.clear().then(() => { return globalDatabase.insert({testData: 'foo'}); }); }); // Since we only set up the database once in this example, it's important // that our tests don't modify it. test('can find things', () => { return globalDatabase.find('thing', {}, results => { expect(results.length).toBeGreaterThan(0); }); }); ``` Here the `beforeAll` ensures that the database is set up before tests run. If setup was synchronous, you could do this without `beforeAll`. The key is that Jest will wait for a promise to resolve, so you can have asynchronous setup as well. 
If `beforeAll` is inside a `describe` block, it runs at the beginning of the describe block. If you want to run something before every test instead of before any test runs, use `beforeEach` instead. ### `beforeEach(fn, timeout)` Runs a function before each of the tests in this file runs. If the function returns a promise or is a generator, Jest waits for that promise to resolve before running the test. Optionally, you can provide a `timeout` (in milliseconds) for specifying how long to wait before aborting. *Note: The default timeout is 5 seconds.* This is often useful if you want to reset some global state that will be used by many tests. For example: ``` const globalDatabase = makeGlobalDatabase(); beforeEach(() => { // Clears the database and adds some testing data. // Jest will wait for this promise to resolve before running tests. return globalDatabase.clear().then(() => { return globalDatabase.insert({testData: 'foo'}); }); }); test('can find things', () => { return globalDatabase.find('thing', {}, results => { expect(results.length).toBeGreaterThan(0); }); }); test('can insert a thing', () => { return globalDatabase.insert('thing', makeThing(), response => { expect(response.success).toBeTruthy(); }); }); ``` Here the `beforeEach` ensures that the database is reset for each test. If `beforeEach` is inside a `describe` block, it runs for each test in the describe block. If you only need to run some setup code once, before any tests run, use `beforeAll` instead. ### `describe(name, fn)` `describe(name, fn)` creates a block that groups together several related tests. 
For example, if you have a `myBeverage` object that is supposed to be delicious but not sour, you could test it with: ``` const myBeverage = { delicious: true, sour: false, }; describe('my beverage', () => { test('is delicious', () => { expect(myBeverage.delicious).toBeTruthy(); }); test('is not sour', () => { expect(myBeverage.sour).toBeFalsy(); }); }); ``` This isn't required - you can write the `test` blocks directly at the top level. But this can be handy if you prefer your tests to be organized into groups. You can also nest `describe` blocks if you have a hierarchy of tests: ``` const binaryStringToNumber = binString => { if (!/^[01]+$/.test(binString)) { throw new CustomError('Not a binary number.'); } return parseInt(binString, 2); }; describe('binaryStringToNumber', () => { describe('given an invalid binary string', () => { test('composed of non-numbers throws CustomError', () => { expect(() => binaryStringToNumber('abc')).toThrowError(CustomError); }); test('with extra whitespace throws CustomError', () => { expect(() => binaryStringToNumber(' 100')).toThrowError(CustomError); }); }); describe('given a valid binary string', () => { test('returns the correct number', () => { expect(binaryStringToNumber('100')).toBe(4); }); }); }); ``` ### `describe.each(table)(name, fn, timeout)` Use `describe.each` if you keep duplicating the same test suites with different data. `describe.each` allows you to write the test suite once and pass data in. `describe.each` is available with two APIs: #### 1. `describe.each(table)(name, fn, timeout)` * `table`: `Array` of Arrays with the arguments that are passed into the `fn` for each row. + *Note* If you pass in a 1D array of primitives, internally it will be mapped to a table i.e. `[1, 2, 3] -> [[1], [2], [3]]` * `name`: `String` the title of the test suite. 
+ Generate unique test titles by positionally injecting parameters with [`printf` formatting](https://nodejs.org/api/util.html#util_util_format_format_args): - `%p` - [pretty-format](https://www.npmjs.com/package/pretty-format). - `%s` - String. - `%d` - Number. - `%i` - Integer. - `%f` - Floating point value. - `%j` - JSON. - `%o` - Object. - `%#` - Index of the test case. - `%%` - single percent sign ('%'). This does not consume an argument. + Or generate unique test titles by injecting properties of the test case object with `$variable` - To inject nested object values you can supply a keyPath, i.e. `$variable.path.to.value` - You can use `$#` to inject the index of the test case - You cannot use `$variable` with the `printf` formatting except for `%%` * `fn`: `Function` the suite of tests to be run; this is the function that will receive the parameters in each row as function arguments. * Optionally, you can provide a `timeout` (in milliseconds) for specifying how long to wait for each row before aborting. *Note: The default timeout is 5 seconds.* Example: ``` describe.each([ [1, 1, 2], [1, 2, 3], [2, 1, 3], ])('.add(%i, %i)', (a, b, expected) => { test(`returns ${expected}`, () => { expect(a + b).toBe(expected); }); test(`returned value not be greater than ${expected}`, () => { expect(a + b).not.toBeGreaterThan(expected); }); test(`returned value not be less than ${expected}`, () => { expect(a + b).not.toBeLessThan(expected); }); }); ``` ``` describe.each([ {a: 1, b: 1, expected: 2}, {a: 1, b: 2, expected: 3}, {a: 2, b: 1, expected: 3}, ])('.add($a, $b)', ({a, b, expected}) => { test(`returns ${expected}`, () => { expect(a + b).toBe(expected); }); test(`returned value not be greater than ${expected}`, () => { expect(a + b).not.toBeGreaterThan(expected); }); test(`returned value not be less than ${expected}`, () => { expect(a + b).not.toBeLessThan(expected); }); }); ``` #### 2.
`describe.each`table`(name, fn, timeout)` * `table`: `Tagged Template Literal` + First row of variable name column headings separated with `|` + One or more subsequent rows of data supplied as template literal expressions using `${value}` syntax. * `name`: `String` the title of the test suite, use `$variable` to inject test data into the suite title from the tagged template expressions, and `$#` for the index of the row. + To inject nested object values you can supply a keyPath, i.e. `$variable.path.to.value` * `fn`: `Function` the suite of tests to be run; this is the function that will receive the test data object. * Optionally, you can provide a `timeout` (in milliseconds) for specifying how long to wait for each row before aborting. *Note: The default timeout is 5 seconds.* Example: ``` describe.each` a | b | expected ${1} | ${1} | ${2} ${1} | ${2} | ${3} ${2} | ${1} | ${3} `('$a + $b', ({a, b, expected}) => { test(`returns ${expected}`, () => { expect(a + b).toBe(expected); }); test(`returned value not be greater than ${expected}`, () => { expect(a + b).not.toBeGreaterThan(expected); }); test(`returned value not be less than ${expected}`, () => { expect(a + b).not.toBeLessThan(expected); }); }); ``` ### `describe.only(name, fn)` Also under the alias: `fdescribe(name, fn)` You can use `describe.only` if you want to run only one describe block: ``` describe.only('my beverage', () => { test('is delicious', () => { expect(myBeverage.delicious).toBeTruthy(); }); test('is not sour', () => { expect(myBeverage.sour).toBeFalsy(); }); }); describe('my other beverage', () => { // ... will be skipped }); ``` ### `describe.only.each(table)(name, fn)` Also under the aliases: `fdescribe.each(table)(name, fn)` and `fdescribe.each`table`(name, fn)` Use `describe.only.each` if you want to only run specific test suites of data driven tests.
`describe.only.each` is available with two APIs: #### `describe.only.each(table)(name, fn)` ``` describe.only.each([ [1, 1, 2], [1, 2, 3], [2, 1, 3], ])('.add(%i, %i)', (a, b, expected) => { test(`returns ${expected}`, () => { expect(a + b).toBe(expected); }); }); test('will not be run', () => { expect(1 / 0).toBe(Infinity); }); ``` #### `describe.only.each`table`(name, fn)` ``` describe.only.each` a | b | expected ${1} | ${1} | ${2} ${1} | ${2} | ${3} ${2} | ${1} | ${3} `('returns $expected when $a is added $b', ({a, b, expected}) => { test('passes', () => { expect(a + b).toBe(expected); }); }); test('will not be run', () => { expect(1 / 0).toBe(Infinity); }); ``` ### `describe.skip(name, fn)` Also under the alias: `xdescribe(name, fn)` You can use `describe.skip` if you do not want to run the tests of a particular `describe` block: ``` describe('my beverage', () => { test('is delicious', () => { expect(myBeverage.delicious).toBeTruthy(); }); test('is not sour', () => { expect(myBeverage.sour).toBeFalsy(); }); }); describe.skip('my other beverage', () => { // ... will be skipped }); ``` Using `describe.skip` is often a cleaner alternative to temporarily commenting out a chunk of tests. Beware that the `describe` block will still run. If you have some setup that also should be skipped, do it in a `beforeAll` or `beforeEach` block. ### `describe.skip.each(table)(name, fn)` Also under the aliases: `xdescribe.each(table)(name, fn)` and `xdescribe.each`table`(name, fn)` Use `describe.skip.each` if you want to stop running a suite of data driven tests.
`describe.skip.each` is available with two APIs: #### `describe.skip.each(table)(name, fn)` ``` describe.skip.each([ [1, 1, 2], [1, 2, 3], [2, 1, 3], ])('.add(%i, %i)', (a, b, expected) => { test(`returns ${expected}`, () => { expect(a + b).toBe(expected); // will not be run }); }); test('will be run', () => { expect(1 / 0).toBe(Infinity); }); ``` #### `describe.skip.each`table`(name, fn)` ``` describe.skip.each` a | b | expected ${1} | ${1} | ${2} ${1} | ${2} | ${3} ${2} | ${1} | ${3} `('returns $expected when $a is added $b', ({a, b, expected}) => { test('will not be run', () => { expect(a + b).toBe(expected); // will not be run }); }); test('will be run', () => { expect(1 / 0).toBe(Infinity); }); ``` ### `test(name, fn, timeout)` Also under the alias: `it(name, fn, timeout)` All you need in a test file is the `test` method which runs a test. For example, let's say there's a function `inchesOfRain()` that should be zero. Your whole test could be: ``` test('did not rain', () => { expect(inchesOfRain()).toBe(0); }); ``` The first argument is the test name; the second argument is a function that contains the expectations to test. The third argument (optional) is `timeout` (in milliseconds) for specifying how long to wait before aborting. *Note: The default timeout is 5 seconds.* > Note: If a **promise is returned** from `test`, Jest will wait for the promise to resolve before letting the test complete. Jest will also wait if you **provide an argument to the test function**, usually called `done`. This could be handy when you want to test callbacks. See how to test async code [here](asynchronous#callbacks). > > For example, let's say `fetchBeverageList()` returns a promise that is supposed to resolve to a list that has `lemon` in it.
You can test this with: ``` test('has lemon in it', () => { return fetchBeverageList().then(list => { expect(list).toContain('lemon'); }); }); ``` Even though the call to `test` will return right away, the test doesn't complete until the promise resolves as well. ### `test.concurrent(name, fn, timeout)` Also under the alias: `it.concurrent(name, fn, timeout)` Use `test.concurrent` if you want the test to run concurrently. > Note: `test.concurrent` is considered experimental - see [here](https://github.com/facebook/jest/labels/Area%3A%20Concurrent) for details on missing features and other issues > > The first argument is the test name; the second argument is an asynchronous function that contains the expectations to test. The third argument (optional) is `timeout` (in milliseconds) for specifying how long to wait before aborting. *Note: The default timeout is 5 seconds.* ``` test.concurrent('addition of 2 numbers', async () => { expect(5 + 3).toBe(8); }); test.concurrent('subtraction of 2 numbers', async () => { expect(5 - 3).toBe(2); }); ``` > Note: Use `maxConcurrency` in configuration to prevent Jest from executing more than the specified number of tests at the same time > > ### `test.concurrent.each(table)(name, fn, timeout)` Also under the alias: `it.concurrent.each(table)(name, fn, timeout)` Use `test.concurrent.each` if you keep duplicating the same test with different data. `test.concurrent.each` allows you to write the test once and pass data in; the tests are all run asynchronously. `test.concurrent.each` is available with two APIs: #### 1. `test.concurrent.each(table)(name, fn, timeout)` * `table`: `Array` of Arrays with the arguments that are passed into the test `fn` for each row. + *Note* If you pass in a 1D array of primitives, internally it will be mapped to a table i.e. `[1, 2, 3] -> [[1], [2], [3]]` * `name`: `String` the title of the test block.
+ Generate unique test titles by positionally injecting parameters with [`printf` formatting](https://nodejs.org/api/util.html#util_util_format_format_args): - `%p` - [pretty-format](https://www.npmjs.com/package/pretty-format). - `%s` - String. - `%d` - Number. - `%i` - Integer. - `%f` - Floating point value. - `%j` - JSON. - `%o` - Object. - `%#` - Index of the test case. - `%%` - single percent sign ('%'). This does not consume an argument. * `fn`: `Function` the test to be run; this is the function that will receive the parameters in each row as function arguments, **this will have to be an asynchronous function**. * Optionally, you can provide a `timeout` (in milliseconds) for specifying how long to wait for each row before aborting. *Note: The default timeout is 5 seconds.* Example: ``` test.concurrent.each([ [1, 1, 2], [1, 2, 3], [2, 1, 3], ])('.add(%i, %i)', async (a, b, expected) => { expect(a + b).toBe(expected); }); ``` #### 2. `test.concurrent.each`table`(name, fn, timeout)` * `table`: `Tagged Template Literal` + First row of variable name column headings separated with `|` + One or more subsequent rows of data supplied as template literal expressions using `${value}` syntax. * `name`: `String` the title of the test, use `$variable` to inject test data into the test title from the tagged template expressions. + To inject nested object values you can supply a keyPath, i.e. `$variable.path.to.value` * `fn`: `Function` the test to be run; this is the function that will receive the test data object, **this will have to be an asynchronous function**. * Optionally, you can provide a `timeout` (in milliseconds) for specifying how long to wait for each row before aborting.
*Note: The default timeout is 5 seconds.* Example: ``` test.concurrent.each` a | b | expected ${1} | ${1} | ${2} ${1} | ${2} | ${3} ${2} | ${1} | ${3} `('returns $expected when $a is added $b', async ({a, b, expected}) => { expect(a + b).toBe(expected); }); ``` ### `test.concurrent.only.each(table)(name, fn)` Also under the alias: `it.concurrent.only.each(table)(name, fn)` Use `test.concurrent.only.each` if you want to only run specific tests with different test data concurrently. `test.concurrent.only.each` is available with two APIs: #### `test.concurrent.only.each(table)(name, fn)` ``` test.concurrent.only.each([ [1, 1, 2], [1, 2, 3], [2, 1, 3], ])('.add(%i, %i)', async (a, b, expected) => { expect(a + b).toBe(expected); }); test('will not be run', () => { expect(1 / 0).toBe(Infinity); }); ``` #### `test.concurrent.only.each`table`(name, fn)` ``` test.concurrent.only.each` a | b | expected ${1} | ${1} | ${2} ${1} | ${2} | ${3} ${2} | ${1} | ${3} `('returns $expected when $a is added $b', async ({a, b, expected}) => { expect(a + b).toBe(expected); }); test('will not be run', () => { expect(1 / 0).toBe(Infinity); }); ``` ### `test.concurrent.skip.each(table)(name, fn)` Also under the alias: `it.concurrent.skip.each(table)(name, fn)` Use `test.concurrent.skip.each` if you want to stop running a collection of asynchronous data driven tests.
`test.concurrent.skip.each` is available with two APIs: #### `test.concurrent.skip.each(table)(name, fn)` ``` test.concurrent.skip.each([ [1, 1, 2], [1, 2, 3], [2, 1, 3], ])('.add(%i, %i)', async (a, b, expected) => { expect(a + b).toBe(expected); // will not be run }); test('will be run', () => { expect(1 / 0).toBe(Infinity); }); ``` #### `test.concurrent.skip.each`table`(name, fn)` ``` test.concurrent.skip.each` a | b | expected ${1} | ${1} | ${2} ${1} | ${2} | ${3} ${2} | ${1} | ${3} `('returns $expected when $a is added $b', async ({a, b, expected}) => { expect(a + b).toBe(expected); // will not be run }); test('will be run', () => { expect(1 / 0).toBe(Infinity); }); ``` ### `test.each(table)(name, fn, timeout)` Also under the aliases: `it.each(table)(name, fn)` and `it.each`table`(name, fn)` Use `test.each` if you keep duplicating the same test with different data. `test.each` allows you to write the test once and pass data in. `test.each` is available with two APIs: #### 1. `test.each(table)(name, fn, timeout)` * `table`: `Array` of Arrays with the arguments that are passed into the test `fn` for each row. + *Note* If you pass in a 1D array of primitives, internally it will be mapped to a table i.e. `[1, 2, 3] -> [[1], [2], [3]]` * `name`: `String` the title of the test block. + Generate unique test titles by positionally injecting parameters with [`printf` formatting](https://nodejs.org/api/util.html#util_util_format_format_args): - `%p` - [pretty-format](https://www.npmjs.com/package/pretty-format). - `%s` - String. - `%d` - Number. - `%i` - Integer. - `%f` - Floating point value. - `%j` - JSON. - `%o` - Object. - `%#` - Index of the test case. - `%%` - single percent sign ('%'). This does not consume an argument. + Or generate unique test titles by injecting properties of the test case object with `$variable` - To inject nested object values you can supply a keyPath, i.e.
`$variable.path.to.value` - You can use `$#` to inject the index of the test case - You cannot use `$variable` with the `printf` formatting except for `%%` * `fn`: `Function` the test to be run; this is the function that will receive the parameters in each row as function arguments. * Optionally, you can provide a `timeout` (in milliseconds) for specifying how long to wait for each row before aborting. *Note: The default timeout is 5 seconds.* Example: ``` test.each([ [1, 1, 2], [1, 2, 3], [2, 1, 3], ])('.add(%i, %i)', (a, b, expected) => { expect(a + b).toBe(expected); }); ``` ``` test.each([ {a: 1, b: 1, expected: 2}, {a: 1, b: 2, expected: 3}, {a: 2, b: 1, expected: 3}, ])('.add($a, $b)', ({a, b, expected}) => { expect(a + b).toBe(expected); }); ``` #### 2. `test.each`table`(name, fn, timeout)` * `table`: `Tagged Template Literal` + First row of variable name column headings separated with `|` + One or more subsequent rows of data supplied as template literal expressions using `${value}` syntax. * `name`: `String` the title of the test, use `$variable` to inject test data into the test title from the tagged template expressions. + To inject nested object values you can supply a keyPath, i.e. `$variable.path.to.value` * `fn`: `Function` the test to be run; this is the function that will receive the test data object. * Optionally, you can provide a `timeout` (in milliseconds) for specifying how long to wait for each row before aborting. *Note: The default timeout is 5 seconds.* Example: ``` test.each` a | b | expected ${1} | ${1} | ${2} ${1} | ${2} | ${3} ${2} | ${1} | ${3} `('returns $expected when $a is added $b', ({a, b, expected}) => { expect(a + b).toBe(expected); }); ``` ### `test.failing(name, fn, timeout)` Also under the alias: `it.failing(name, fn, timeout)` note This is only available with the default [jest-circus](https://github.com/facebook/jest/tree/main/packages/jest-circus) runner.
Use `test.failing` when you are writing a test and expecting it to fail. These tests behave the opposite way to normal tests: if a `failing` test throws an error, it will pass; if it does not throw, it will fail. tip You can use this type of test, for example, when writing code in a BDD way. In that case the tests will not show up as failing until they pass. Then you can remove the `failing` modifier to make them pass. It can also be a nice way to contribute failing tests to a project, even if you don't know how to fix the bug. Example: ``` test.failing('it is not equal', () => { expect(5).toBe(6); // this test will pass }); test.failing('it is equal', () => { expect(10).toBe(10); // this test will fail }); ``` ### `test.failing.each(table)(name, fn)` Also under the aliases: `it.failing.each(table)(name, fn)` and `it.failing.each`table`(name, fn)` note This is only available with the default [jest-circus](https://github.com/facebook/jest/tree/main/packages/jest-circus) runner. You can also run multiple tests at once by adding `each` after `failing`. Example: ``` test.failing.each([ {a: 1, b: 1, expected: 2}, {a: 1, b: 2, expected: 3}, {a: 2, b: 1, expected: 3}, ])('.add($a, $b)', ({a, b, expected}) => { expect(a + b).toBe(expected); }); ``` ### `test.only.failing(name, fn, timeout)` Also under the aliases: `it.only.failing(name, fn, timeout)`, `fit.failing(name, fn, timeout)` note This is only available with the default [jest-circus](https://github.com/facebook/jest/tree/main/packages/jest-circus) runner. Use `test.only.failing` if you want to only run a specific failing test. ### `test.skip.failing(name, fn, timeout)` Also under the aliases: `it.skip.failing(name, fn, timeout)`, `xit.failing(name, fn, timeout)`, `xtest.failing(name, fn, timeout)` note This is only available with the default [jest-circus](https://github.com/facebook/jest/tree/main/packages/jest-circus) runner. Use `test.skip.failing` if you want to skip running a specific failing test.
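Conceptually, the `failing` modifier inverts the outcome of a test: a thrown error counts as a pass, and a clean run counts as a failure. A minimal plain-JavaScript sketch of that inversion (illustrative only, not Jest's actual implementation):

```javascript
// Invert a test outcome the way the `failing` modifier does:
// throwing means pass, completing normally means fail.
function runFailing(fn) {
  try {
    fn();
  } catch (error) {
    return 'passed'; // the expected failure happened
  }
  return 'failed'; // the test unexpectedly succeeded
}

console.log(runFailing(() => { throw new Error('known bug'); })); // passed
console.log(runFailing(() => {}));                                // failed
```

This is why a `failing` test starts showing up as a failure the moment the underlying bug is fixed: the body stops throwing, so the inverted outcome flips.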
### `test.only(name, fn, timeout)` Also under the aliases: `it.only(name, fn, timeout)`, and `fit(name, fn, timeout)` When you are debugging a large test file, you will often only want to run a subset of tests. You can use `.only` to specify which tests are the only ones you want to run in that test file. Optionally, you can provide a `timeout` (in milliseconds) for specifying how long to wait before aborting. *Note: The default timeout is 5 seconds.* For example, let's say you had these tests: ``` test.only('it is raining', () => { expect(inchesOfRain()).toBeGreaterThan(0); }); test('it is not snowing', () => { expect(inchesOfSnow()).toBe(0); }); ``` Only the "it is raining" test will run in that test file, since it is run with `test.only`. Usually you wouldn't check code using `test.only` into source control - you would use it for debugging, and remove it once you have fixed the broken tests. ### `test.only.each(table)(name, fn)` Also under the aliases: `it.only.each(table)(name, fn)`, `fit.each(table)(name, fn)`, `it.only.each`table`(name, fn)` and `fit.each`table`(name, fn)` Use `test.only.each` if you want to only run specific tests with different test data. `test.only.each` is available with two APIs: #### `test.only.each(table)(name, fn)` ``` test.only.each([ [1, 1, 2], [1, 2, 3], [2, 1, 3], ])('.add(%i, %i)', (a, b, expected) => { expect(a + b).toBe(expected); }); test('will not be run', () => { expect(1 / 0).toBe(Infinity); }); ``` #### `test.only.each`table`(name, fn)` ``` test.only.each` a | b | expected ${1} | ${1} | ${2} ${1} | ${2} | ${3} ${2} | ${1} | ${3} `('returns $expected when $a is added $b', ({a, b, expected}) => { expect(a + b).toBe(expected); }); test('will not be run', () => { expect(1 / 0).toBe(Infinity); }); ``` ### `test.skip(name, fn)` Also under the aliases: `it.skip(name, fn)`, `xit(name, fn)`, and `xtest(name, fn)` When you are maintaining a large codebase, you may sometimes find a test that is temporarily broken for some reason.
If you want to skip running this test, but you don't want to delete this code, you can use `test.skip` to specify some tests to skip. For example, let's say you had these tests: ``` test('it is raining', () => { expect(inchesOfRain()).toBeGreaterThan(0); }); test.skip('it is not snowing', () => { expect(inchesOfSnow()).toBe(0); }); ``` Only the "it is raining" test will run, since the other test is run with `test.skip`. You could comment the test out, but it's often a bit nicer to use `test.skip` because it will maintain indentation and syntax highlighting. ### `test.skip.each(table)(name, fn)` Also under the aliases: `it.skip.each(table)(name, fn)`, `xit.each(table)(name, fn)`, `xtest.each(table)(name, fn)`, `it.skip.each`table`(name, fn)`, `xit.each`table`(name, fn)` and `xtest.each`table`(name, fn)` Use `test.skip.each` if you want to stop running a collection of data driven tests. `test.skip.each` is available with two APIs: #### `test.skip.each(table)(name, fn)` ``` test.skip.each([ [1, 1, 2], [1, 2, 3], [2, 1, 3], ])('.add(%i, %i)', (a, b, expected) => { expect(a + b).toBe(expected); // will not be run }); test('will be run', () => { expect(1 / 0).toBe(Infinity); }); ``` #### `test.skip.each`table`(name, fn)` ``` test.skip.each` a | b | expected ${1} | ${1} | ${2} ${1} | ${2} | ${3} ${2} | ${1} | ${3} `('returns $expected when $a is added $b', ({a, b, expected}) => { expect(a + b).toBe(expected); // will not be run }); test('will be run', () => { expect(1 / 0).toBe(Infinity); }); ``` ### `test.todo(name)` Also under the alias: `it.todo(name)` Use `test.todo` when you are planning on writing tests. These tests will be highlighted in the summary output at the end so you know how many tests you still need to write. *Note*: If you supply a test callback function then `test.todo` will throw an error. If you have already implemented the test and it is broken and you do not want it to run, then use `test.skip` instead.
#### API * `name`: `String` the title of the test plan. Example: ``` const add = (a, b) => a + b; test.todo('add should be associative'); ``` TypeScript Usage ---------------- info These TypeScript usage tips and caveats are only applicable if you import from `'@jest/globals'`: ``` import {describe, test} from '@jest/globals'; ``` ### `.each` The `.each` modifier offers a few different ways to define a table of test cases. Some of the APIs have caveats related to the type inference of the arguments that are passed to `describe` or `test` callback functions. Let's take a look at each of them. note For simplicity `test.each` is picked for the examples, but the type inference is identical in all cases where the `.each` modifier can be used: `describe.each`, `test.concurrent.only.each`, `test.skip.each`, etc. #### Array of objects The array of objects API is the most verbose, but it makes the type inference a painless task. A `table` can be inlined: ``` test.each([ {name: 'a', path: 'path/to/a', count: 1, write: true}, {name: 'b', path: 'path/to/b', count: 3}, ])('inline table', ({name, path, count, write}) => { // arguments are typed as expected, e.g. `write: boolean | undefined` }); ``` Or declared separately as a variable: ``` const table = [ {a: 1, b: 2, expected: 'three', extra: true}, {a: 3, b: 4, expected: 'seven', extra: false}, {a: 5, b: 6, expected: 'eleven'}, ]; test.each(table)('table as a variable', ({a, b, expected, extra}) => { // again everything is typed as expected, e.g. `extra: boolean | undefined` }); ``` #### Array of arrays The array of arrays style will work smoothly with inlined tables: ``` test.each([ [1, 2, 'three', true], [3, 4, 'seven', false], [5, 6, 'eleven'], ])('inline table example', (a, b, expected, extra) => { // arguments are typed as expected, e.g.
`extra: boolean | undefined` }); ``` However, if a table is declared as a separate variable, it must be typed as an array of tuples for correct type inference (the annotation can be omitted only if all elements of a row are of the same type): ``` const table: Array<[number, number, string, boolean?]> = [ [1, 2, 'three', true], [3, 4, 'seven', false], [5, 6, 'eleven'], ]; test.each(table)('table as a variable example', (a, b, expected, extra) => { // without the annotation types are incorrect, e.g. `a: number | string | boolean` }); ``` #### Template literal If all values are of the same type, the template literal API will type the arguments correctly: ``` test.each` a | b | expected ${1} | ${2} | ${3} ${3} | ${4} | ${7} ${5} | ${6} | ${11} `('template literal example', ({a, b, expected}) => { // all arguments are of type `number` }); ``` Otherwise it will require a generic type argument: ``` test.each<{a: number; b: number; expected: string; extra?: boolean}>` a | b | expected | extra ${1} | ${2} | ${'three'} | ${true} ${3} | ${4} | ${'seven'} | ${false} ${5} | ${6} | ${'eleven'} `('template literal example', ({a, b, expected, extra}) => { // without the generic argument in this case types would default to `unknown` }); ```
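To make the tagged template API less magical, here is a rough sketch of how a heading row and the interpolated data expressions could be turned into the row objects that the test callback receives (illustrative only; Jest's real implementation lives in the `jest-each` package and handles more edge cases):

```javascript
// Parse a `.each`-style tagged template into an array of row objects.
function parseTable(strings, ...values) {
  // The first literal chunk contains the heading row, e.g. "a | b | expected".
  const headings = strings[0]
    .trim()
    .split('|')
    .map(heading => heading.trim());
  const rows = [];
  // Group the interpolated values into rows, one value per heading.
  for (let i = 0; i < values.length; i += headings.length) {
    const row = {};
    headings.forEach((heading, j) => {
      row[heading] = values[i + j];
    });
    rows.push(row);
  }
  return rows;
}

const rows = parseTable`
  a    | b    | expected
  ${1} | ${1} | ${2}
  ${1} | ${2} | ${3}
`;
console.log(JSON.stringify(rows));
// [{"a":1,"b":1,"expected":2},{"a":1,"b":2,"expected":3}]
```

Each parsed row object is what gets destructured in the callback (`({a, b, expected}) => ...`), and its properties are what `$variable` references in the test title resolve against.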
jest Using with webpack Using with webpack ================== Jest can be used in projects that use [webpack](https://webpack.js.org/) to manage assets, styles, and compilation. webpack *does* offer some unique challenges over other tools because it integrates directly with your application to allow managing stylesheets, assets like images and fonts, along with the expansive ecosystem of compile-to-JavaScript languages and tools. A webpack example ----------------- Let's start with a common sort of webpack config file and translate it to a Jest setup. ``` module.exports = { module: { loaders: [ {exclude: ['node_modules'], loader: 'babel', test: /\.jsx?$/}, {loader: 'style-loader!css-loader', test: /\.css$/}, {loader: 'url-loader', test: /\.gif$/}, {loader: 'file-loader', test: /\.(ttf|eot|svg)$/}, ], }, resolve: { alias: { config$: './configs/app-config.js', react: './vendor/react-master', }, extensions: ['', 'js', 'jsx'], modules: [ 'node_modules', 'bower_components', 'shared', '/shared/vendor/modules', ], }, }; ``` webpack.config.js If you have JavaScript files that are transformed by Babel, you can [enable support for Babel](getting-started#using-babel) by installing the `babel-jest` plugin. Non-Babel JavaScript transformations can be handled with Jest's [`transform`](configuration#transform-objectstring-pathtotransformer--pathtotransformer-object) config option. ### Handling Static Assets Next, let's configure Jest to gracefully handle asset files such as stylesheets and images. Usually, these files aren't particularly useful in tests so we can safely mock them out. However, if you are using CSS Modules then it's better to mock a proxy for your className lookups. 
``` { "jest": { "moduleNameMapper": { "\\.(jpg|jpeg|png|gif|eot|otf|webp|svg|ttf|woff|woff2|mp4|webm|wav|mp3|m4a|aac|oga)$": "<rootDir>/__mocks__/fileMock.js", "\\.(css|less)$": "<rootDir>/__mocks__/styleMock.js" } } } ``` package.json And the mock files themselves: ``` module.exports = {}; ``` \_\_mocks\_\_/styleMock.js ``` module.exports = 'test-file-stub'; ``` \_\_mocks\_\_/fileMock.js ### Mocking CSS Modules You can use an [ES6 Proxy](https://github.com/keyanzhang/identity-obj-proxy) to mock [CSS Modules](https://github.com/css-modules/css-modules): * npm * Yarn ``` npm install --save-dev identity-obj-proxy ``` ``` yarn add --dev identity-obj-proxy ``` Then all your className lookups on the styles object will be returned as-is (e.g., `styles.foobar === 'foobar'`). This is pretty handy for React [Snapshot Testing](snapshot-testing). ``` { "jest": { "moduleNameMapper": { "\\.(jpg|jpeg|png|gif|eot|otf|webp|svg|ttf|woff|woff2|mp4|webm|wav|mp3|m4a|aac|oga)$": "<rootDir>/__mocks__/fileMock.js", "\\.(css|less)$": "identity-obj-proxy" } } } ``` package.json (for CSS Modules) If `moduleNameMapper` cannot fulfill your requirements, you can use Jest's [`transform`](configuration#transform-objectstring-pathtotransformer--pathtotransformer-object) config option to specify how assets are transformed. 
For example, a transformer that returns the basename of a file (such that `require('logo.jpg');` returns `'logo'`) can be written as: ``` const path = require('path'); module.exports = { process(sourceText, sourcePath, options) { return { code: `module.exports = ${JSON.stringify(path.basename(sourcePath))};`, }; }, }; ``` fileTransformer.js ``` { "jest": { "moduleNameMapper": { "\\.(css|less)$": "identity-obj-proxy" }, "transform": { "\\.(jpg|jpeg|png|gif|eot|otf|webp|svg|ttf|woff|woff2|mp4|webm|wav|mp3|m4a|aac|oga)$": "<rootDir>/fileTransformer.js" } } } ``` package.json (for custom transformers and CSS Modules) We've told Jest to ignore files matching a stylesheet or image extension, and instead, require our mock files. You can adjust the regular expression to match the file types your webpack config handles. tip Remember to include the default `babel-jest` transformer explicitly, if you wish to use it alongside with additional code preprocessors: ``` "transform": { "\\.[jt]sx?$": "babel-jest", "\\.css$": "some-css-transformer", } ``` ### Configuring Jest to find our files Now that Jest knows how to process our files, we need to tell it how to *find* them. For webpack's `modulesDirectories`, and `extensions` options there are direct analogs in Jest's `moduleDirectories` and `moduleFileExtensions` options. ``` { "jest": { "moduleFileExtensions": ["js", "jsx"], "moduleDirectories": ["node_modules", "bower_components", "shared"], "moduleNameMapper": { "\\.(css|less)$": "<rootDir>/__mocks__/styleMock.js", "\\.(gif|ttf|eot|svg)$": "<rootDir>/__mocks__/fileMock.js" } } } ``` package.json > Note: `<rootDir>` is a special token that gets replaced by Jest with the root of your project. Most of the time this will be the folder where your `package.json` is located unless you specify a custom `rootDir` option in your configuration. 
> > Similarly, webpack's `resolve.root` option functions like setting the `NODE_PATH` env variable; you can either set that variable or make use of the `modulePaths` option. ``` { "jest": { "modulePaths": ["/shared/vendor/modules"], "moduleFileExtensions": ["js", "jsx"], "moduleDirectories": ["node_modules", "bower_components", "shared"], "moduleNameMapper": { "\\.(css|less)$": "<rootDir>/__mocks__/styleMock.js", "\\.(gif|ttf|eot|svg)$": "<rootDir>/__mocks__/fileMock.js" } } } ``` package.json And finally, we have to handle the webpack `alias`. For that, we can make use of the `moduleNameMapper` option again. ``` { "jest": { "modulePaths": ["/shared/vendor/modules"], "moduleFileExtensions": ["js", "jsx"], "moduleDirectories": ["node_modules", "bower_components", "shared"], "moduleNameMapper": { "\\.(css|less)$": "<rootDir>/__mocks__/styleMock.js", "\\.(gif|ttf|eot|svg)$": "<rootDir>/__mocks__/fileMock.js", "^react(.*)$": "<rootDir>/vendor/react-master$1", "^config$": "<rootDir>/configs/app-config.js" } } } ``` package.json That's it! webpack is a complex and flexible tool, so you may have to make some adjustments to handle your specific application's needs. Luckily, for most projects, Jest should be more than flexible enough to handle your webpack config. > Note: For more complex webpack configurations, you may also want to investigate projects such as [babel-plugin-webpack-loaders](https://github.com/istarkov/babel-plugin-webpack-loaders). > > Using with webpack 2 -------------------- webpack 2 offers native support for ES modules. However, Jest runs in Node, and thus requires ES modules to be transpiled to CommonJS modules. As such, if you are using webpack 2, you most likely will want to configure Babel to transpile ES modules to CommonJS modules only in the `test` environment. ``` { "presets": [["env", {"modules": false}]], "env": { "test": { "plugins": ["transform-es2015-modules-commonjs"] } } } ``` .babelrc > Note: Jest caches files to speed up test execution. 
If you updated .babelrc and Jest is still not working, try running Jest with `--no-cache`. > > If you use dynamic imports (`import('some-file.js').then(module => ...)`), you need to enable the `dynamic-import-node` plugin. ``` { "presets": [["env", {"modules": false}]], "plugins": ["syntax-dynamic-import"], "env": { "test": { "plugins": ["dynamic-import-node"] } } } ``` .babelrc For an example of how to use Jest with webpack with React, Redux, and Node, you can view one [here](https://github.com/jenniferabowd/jest_react_redux_node_webpack_complex_example). jest Using with MongoDB Using with MongoDB ================== With the [Global Setup/Teardown](configuration#globalsetup-string) and [Async Test Environment](configuration#testenvironment-string) APIs, Jest can work smoothly with [MongoDB](https://www.mongodb.com/). Use jest-mongodb Preset ----------------------- [Jest MongoDB](https://github.com/shelfio/jest-mongodb) provides all required configuration to run your tests using MongoDB. 1. First install `@shelf/jest-mongodb` * npm * Yarn ``` npm install --save-dev @shelf/jest-mongodb ``` ``` yarn add --dev @shelf/jest-mongodb ``` 2. Specify preset in your Jest configuration: ``` { "preset": "@shelf/jest-mongodb" } ``` 3. Write your test ``` const {MongoClient} = require('mongodb'); describe('insert', () => { let connection; let db; beforeAll(async () => { connection = await MongoClient.connect(globalThis.__MONGO_URI__, { useNewUrlParser: true, useUnifiedTopology: true, }); db = await connection.db(globalThis.__MONGO_DB_NAME__); }); afterAll(async () => { await connection.close(); }); it('should insert a doc into collection', async () => { const users = db.collection('users'); const mockUser = {_id: 'some-user-id', name: 'John'}; await users.insertOne(mockUser); const insertedUser = await users.findOne({_id: 'some-user-id'}); expect(insertedUser).toEqual(mockUser); }); }); ``` There's no need to load any dependencies. 
See [documentation](https://github.com/shelfio/jest-mongodb) for details (configuring MongoDB version, etc). jest Testing Web Frameworks Testing Web Frameworks ====================== Jest is a universal testing platform, with the ability to adapt to any JavaScript library or framework. In this section, we'd like to link to community posts and articles about integrating Jest into popular JS libraries. React ----- * [Testing ReactJS components with Jest](https://testing-library.com/docs/react-testing-library/example-intro) by Kent C. Dodds ([@kentcdodds](https://twitter.com/kentcdodds)) Vue.js ------ * [Testing Vue.js components with Jest](https://alexjoverm.github.io/series/Unit-Testing-Vue-js-Components-with-the-Official-Vue-Testing-Tools-and-Jest/) by Alex Jover Morales ([@alexjoverm](https://twitter.com/alexjoverm)) * [Jest for all: Episode 1 — Vue.js](https://medium.com/@kentaromiura_the_js_guy/jest-for-all-episode-1-vue-js-d616bccbe186#.d573vrce2) by Cristian Carlesso ([@kentaromiura](https://twitter.com/kentaromiura)) AngularJS --------- * [Testing an AngularJS app with Jest](https://medium.com/aya-experience/testing-an-angularjs-app-with-jest-3029a613251) by Matthieu Lux ([@Swiip](https://twitter.com/Swiip)) * [Running AngularJS Tests with Jest](https://engineering.talentpair.com/running-angularjs-tests-with-jest-49d0cc9c6d26) by Ben Brandt ([@benjaminbrandt](https://twitter.com/benjaminbrandt)) * [AngularJS Unit Tests with Jest Actions (Traditional Chinese)](https://dwatow.github.io/2019/08-14-angularjs/angular-jest/?fbclid=IwAR2SrqYg_o6uvCQ79FdNPeOxs86dUqB6pPKgd9BgnHt1kuIDRyRM-ch11xg) by Chris Wang ([@dwatow](https://github.com/dwatow)) Angular ------- * [Testing Angular faster with Jest](https://www.xfive.co/blog/testing-angular-faster-jest/) by Michał Pierzchała ([@thymikee](https://twitter.com/thymikee)) MobX ---- * [How to Test React and MobX with Jest](https://semaphoreci.com/community/tutorials/how-to-test-react-and-mobx-with-jest) by Will Stern 
([@willsterndev](https://twitter.com/willsterndev)) Redux ----- * [Writing Tests](https://redux.js.org/recipes/writing-tests) by Redux docs Express.js ---------- * [How to test Express.js with Jest and Supertest](http://www.albertgao.xyz/2017/05/24/how-to-test-expressjs-with-jest-and-supertest/) by Albert Gao ([@albertgao](https://twitter.com/albertgao)) GatsbyJS -------- * [Unit Testing](https://www.gatsbyjs.org/docs/unit-testing/) by GatsbyJS docs Hapi.js ------- * [Testing Hapi.js With Jest](https://github.com/sivasankars/testing-hapi.js-with-jest) by Niralar Next.js ------- * [Jest and React Testing Library](https://nextjs.org/docs/testing#jest-and-react-testing-library) by Next.js docs jest ES6 Class Mocks ES6 Class Mocks =============== Jest can be used to mock ES6 classes that are imported into files you want to test. ES6 classes are constructor functions with some syntactic sugar. Therefore, any mock for an ES6 class must be a function or an actual ES6 class (which is, again, another function). So you can mock them using [mock functions](mock-functions). An ES6 Class Example -------------------- We'll use a contrived example of a class that plays sound files, `SoundPlayer`, and a consumer class which uses that class, `SoundPlayerConsumer`. We'll mock `SoundPlayer` in our tests for `SoundPlayerConsumer`. 
``` export default class SoundPlayer { constructor() { this.foo = 'bar'; } playSoundFile(fileName) { console.log('Playing sound file ' + fileName); } } ``` sound-player.js ``` import SoundPlayer from './sound-player'; export default class SoundPlayerConsumer { constructor() { this.soundPlayer = new SoundPlayer(); } playSomethingCool() { const coolSoundFileName = 'song.mp3'; this.soundPlayer.playSoundFile(coolSoundFileName); } } ``` sound-player-consumer.js The 4 ways to create an ES6 class mock -------------------------------------- ### Automatic mock Calling `jest.mock('./sound-player')` returns a useful "automatic mock" you can use to spy on calls to the class constructor and all of its methods. It replaces the ES6 class with a mock constructor, and replaces all of its methods with [mock functions](mock-functions) that always return `undefined`. Method calls are saved in `theAutomaticMock.mock.instances[index].methodName.mock.calls`. Please note that if you use arrow functions in your classes, they will *not* be part of the mock. The reason for that is that arrow functions are not present on the object's prototype; they are merely properties holding a reference to a function. If you don't need to replace the implementation of the class, this is the easiest option to set up. 
For example: ``` import SoundPlayer from './sound-player'; import SoundPlayerConsumer from './sound-player-consumer'; jest.mock('./sound-player'); // SoundPlayer is now a mock constructor beforeEach(() => { // Clear all instances and calls to constructor and all methods: SoundPlayer.mockClear(); }); it('We can check if the consumer called the class constructor', () => { const soundPlayerConsumer = new SoundPlayerConsumer(); expect(SoundPlayer).toHaveBeenCalledTimes(1); }); it('We can check if the consumer called a method on the class instance', () => { // Show that mockClear() is working: expect(SoundPlayer).not.toHaveBeenCalled(); const soundPlayerConsumer = new SoundPlayerConsumer(); // Constructor should have been called again: expect(SoundPlayer).toHaveBeenCalledTimes(1); const coolSoundFileName = 'song.mp3'; soundPlayerConsumer.playSomethingCool(); // mock.instances is available with automatic mocks: const mockSoundPlayerInstance = SoundPlayer.mock.instances[0]; const mockPlaySoundFile = mockSoundPlayerInstance.playSoundFile; expect(mockPlaySoundFile.mock.calls[0][0]).toEqual(coolSoundFileName); // Equivalent to above check: expect(mockPlaySoundFile).toHaveBeenCalledWith(coolSoundFileName); expect(mockPlaySoundFile).toHaveBeenCalledTimes(1); }); ``` ### Manual mock Create a [manual mock](manual-mocks) by saving a mock implementation in the `__mocks__` folder. This allows you to specify the implementation, and it can be used across test files. 
``` // Import this named export into your test file: export const mockPlaySoundFile = jest.fn(); const mock = jest.fn().mockImplementation(() => { return {playSoundFile: mockPlaySoundFile}; }); export default mock; ``` \_\_mocks\_\_/sound-player.js Import the mock and the mock method shared by all instances: ``` import SoundPlayer, {mockPlaySoundFile} from './sound-player'; import SoundPlayerConsumer from './sound-player-consumer'; jest.mock('./sound-player'); // SoundPlayer is now a mock constructor beforeEach(() => { // Clear all instances and calls to constructor and all methods: SoundPlayer.mockClear(); mockPlaySoundFile.mockClear(); }); it('We can check if the consumer called the class constructor', () => { const soundPlayerConsumer = new SoundPlayerConsumer(); expect(SoundPlayer).toHaveBeenCalledTimes(1); }); it('We can check if the consumer called a method on the class instance', () => { const soundPlayerConsumer = new SoundPlayerConsumer(); const coolSoundFileName = 'song.mp3'; soundPlayerConsumer.playSomethingCool(); expect(mockPlaySoundFile).toHaveBeenCalledWith(coolSoundFileName); }); ``` sound-player-consumer.test.js ### Calling [`jest.mock()`](jest-object#jestmockmodulename-factory-options) with the module factory parameter `jest.mock(path, moduleFactory)` takes a **module factory** argument. A module factory is a function that returns the mock. In order to mock a constructor function, the module factory must return a constructor function. In other words, the module factory must be a function that returns a function - a higher-order function (HOF). ``` import SoundPlayer from './sound-player'; const mockPlaySoundFile = jest.fn(); jest.mock('./sound-player', () => { return jest.fn().mockImplementation(() => { return {playSoundFile: mockPlaySoundFile}; }); }); ``` caution Since calls to `jest.mock()` are hoisted to the top of the file, Jest prevents access to out-of-scope variables. 
By default, you cannot first define a variable and then use it in the factory. Jest will disable this check for variables that start with the word `mock`. However, it is still up to you to guarantee that they will be initialized on time. Be aware of the [Temporal Dead Zone](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/let#temporal_dead_zone_tdz). For example, the following will throw an out-of-scope error due to the use of `fake` instead of `mock` in the variable declaration. ``` // Note: this will fail import SoundPlayer from './sound-player'; const fakePlaySoundFile = jest.fn(); jest.mock('./sound-player', () => { return jest.fn().mockImplementation(() => { return {playSoundFile: fakePlaySoundFile}; }); }); ``` The following will throw a `ReferenceError` despite using `mock` in the variable declaration, as `mockSoundPlayer` is not wrapped in an arrow function; after hoisting, it is accessed before it is initialized. ``` import SoundPlayer from './sound-player'; const mockSoundPlayer = jest.fn().mockImplementation(() => { return {playSoundFile: mockPlaySoundFile}; }); // results in a ReferenceError jest.mock('./sound-player', () => { return mockSoundPlayer; }); ``` ### Replacing the mock using [`mockImplementation()`](mock-function-api#mockfnmockimplementationfn) or [`mockImplementationOnce()`](mock-function-api#mockfnmockimplementationoncefn) You can replace all of the above mocks in order to change the implementation, for a single test or all tests, by calling `mockImplementation()` on the existing mock. Calls to `jest.mock` are hoisted to the top of the code. You can specify a mock later, e.g. in `beforeAll()`, by calling `mockImplementation()` (or `mockImplementationOnce()`) on the existing mock instead of using the factory parameter. 
This also allows you to change the mock between tests, if needed: ``` import SoundPlayer from './sound-player'; import SoundPlayerConsumer from './sound-player-consumer'; jest.mock('./sound-player'); describe('When SoundPlayer throws an error', () => { beforeAll(() => { SoundPlayer.mockImplementation(() => { return { playSoundFile: () => { throw new Error('Test error'); }, }; }); }); it('Should throw an error when calling playSomethingCool', () => { const soundPlayerConsumer = new SoundPlayerConsumer(); expect(() => soundPlayerConsumer.playSomethingCool()).toThrow(); }); }); ``` In depth: Understanding mock constructor functions -------------------------------------------------- Building your constructor function mock using `jest.fn().mockImplementation()` makes mocks appear more complicated than they really are. This section shows how you can create your own mocks to illustrate how mocking works. ### Manual mock that is another ES6 class If you define an ES6 class using the same filename as the mocked class in the `__mocks__` folder, it will serve as the mock. This class will be used in place of the real class. This allows you to inject a test implementation for the class, but does not provide a way to spy on calls. For the contrived example, the mock might look like this: ``` export default class SoundPlayer { constructor() { console.log('Mock SoundPlayer: constructor was called'); } playSoundFile() { console.log('Mock SoundPlayer: playSoundFile was called'); } } ``` \_\_mocks\_\_/sound-player.js ### Mock using module factory parameter The module factory function passed to `jest.mock(path, moduleFactory)` can be a HOF that returns a function\*. This will allow calling `new` on the mock. Again, this allows you to inject different behavior for testing, but does not provide a way to spy on calls. #### \* Module factory function must return a function In order to mock a constructor function, the module factory must return a constructor function. 
In other words, the module factory must be a function that returns a function - a higher-order function (HOF). ``` jest.mock('./sound-player', () => { return function () { return {playSoundFile: () => {}}; }; }); ``` ***Note: Arrow functions won't work*** Note that the mock can't be an arrow function because calling `new` on an arrow function is not allowed in JavaScript. So this won't work: ``` jest.mock('./sound-player', () => { return () => { // Does not work; arrow functions can't be called with new return {playSoundFile: () => {}}; }; }); ``` This will throw ***TypeError: \_soundPlayer2.default is not a constructor***, unless the code is transpiled to ES5, e.g. by `@babel/preset-env`. (ES5 doesn't have arrow functions nor classes, so both will be transpiled to plain functions.) Mocking a specific method of a class ------------------------------------ Let's say that you want to mock or spy on the method `playSoundFile` within the class `SoundPlayer`. A simple example: ``` // your jest test file below import SoundPlayer from './sound-player'; import SoundPlayerConsumer from './sound-player-consumer'; const playSoundFileMock = jest .spyOn(SoundPlayer.prototype, 'playSoundFile') .mockImplementation(() => { console.log('mocked function'); }); // comment this line if you just want to "spy" it('player consumer plays music', () => { const player = new SoundPlayerConsumer(); player.playSomethingCool(); expect(playSoundFileMock).toHaveBeenCalled(); }); ``` ### Static, getter and setter methods Let's imagine our class `SoundPlayer` has a getter method `foo` and a static method `brand` (unlike the earlier example, there is no `this.foo = 'bar'` assignment in the constructor; assigning to a property that only has a getter would throw a `TypeError` in strict mode): ``` export default class SoundPlayer { playSoundFile(fileName) { console.log('Playing sound file ' + fileName); } get foo() { return 'bar'; } static brand() { return 'player-brand'; } } ``` You can mock/spy on them easily; here is an example: ``` // your jest test file below import SoundPlayer from './sound-player'; import SoundPlayerConsumer from 
'./sound-player-consumer'; const staticMethodMock = jest .spyOn(SoundPlayer, 'brand') .mockImplementation(() => 'some-mocked-brand'); const getterMethodMock = jest .spyOn(SoundPlayer.prototype, 'foo', 'get') .mockImplementation(() => 'some-mocked-result'); it('custom methods are called', () => { const player = new SoundPlayer(); const foo = player.foo; const brand = SoundPlayer.brand(); expect(staticMethodMock).toHaveBeenCalled(); expect(getterMethodMock).toHaveBeenCalled(); }); ``` Keeping track of usage (spying on the mock) ------------------------------------------- Injecting a test implementation is helpful, but you will probably also want to test whether the class constructor and methods are called with the correct parameters. ### Spying on the constructor In order to track calls to the constructor, replace the function returned by the HOF with a Jest mock function. Create it with [`jest.fn()`](jest-object#jestfnimplementation), and then specify its implementation with `mockImplementation()`. ``` import SoundPlayer from './sound-player'; jest.mock('./sound-player', () => { // Works and lets you check for constructor calls: return jest.fn().mockImplementation(() => { return {playSoundFile: () => {}}; }); }); ``` This will let us inspect usage of our mocked class, using `SoundPlayer.mock.calls`: `expect(SoundPlayer).toHaveBeenCalled();` or near-equivalent: `expect(SoundPlayer.mock.calls.length).toEqual(1);` ### Mocking non-default class exports If the class is **not** the default export from the module then you need to return an object with the key that is the same as the class export name. 
``` import {SoundPlayer} from './sound-player'; jest.mock('./sound-player', () => { // Works and lets you check for constructor calls: return { SoundPlayer: jest.fn().mockImplementation(() => { return {playSoundFile: () => {}}; }), }; }); ``` ### Spying on methods of our class Our mocked class will need to provide any member functions (`playSoundFile` in the example) that will be called during our tests, or else we'll get an error for calling a function that doesn't exist. But we'll probably want to also spy on calls to those methods, to ensure that they were called with the expected parameters. A new object will be created each time the mock constructor function is called during tests. To spy on method calls in all of these objects, we populate `playSoundFile` with another mock function, and store a reference to that same mock function in our test file, so it's available during tests. ``` import SoundPlayer from './sound-player'; const mockPlaySoundFile = jest.fn(); jest.mock('./sound-player', () => { return jest.fn().mockImplementation(() => { return {playSoundFile: mockPlaySoundFile}; // Now we can track calls to playSoundFile }); }); ``` The manual mock equivalent of this would be: ``` // Import this named export into your test file export const mockPlaySoundFile = jest.fn(); const mock = jest.fn().mockImplementation(() => { return {playSoundFile: mockPlaySoundFile}; }); export default mock; ``` \_\_mocks\_\_/sound-player.js Usage is similar to the module factory function, except that you can omit the second argument from `jest.mock()`, and you must import the mocked method into your test file, since it is no longer defined there. Use the original module path for this; don't include `__mocks__`. 
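To see why storing one shared reference works, here is a plain JavaScript sketch with no Jest involved (`MockSoundPlayer` and `recordedCalls` are hypothetical stand-ins for the mock constructor and `mock.calls`): every object the constructor returns closes over the same spy function, so calls from all instances land in one place.

```javascript
// Plain JavaScript stand-in for the mocked constructor -- not Jest's API.
// Every "instance" shares the same recording function.
const recordedCalls = [];
const mockPlaySoundFile = (...args) => {
  recordedCalls.push(args); // record the arguments of every call
};

function MockSoundPlayer() {
  return {playSoundFile: mockPlaySoundFile};
}

const playerA = new MockSoundPlayer();
const playerB = new MockSoundPlayer();
playerA.playSoundFile('song.mp3');
playerB.playSoundFile('another-song.mp3');

// Both instances' calls are visible through the single shared reference:
console.log(recordedCalls); // [['song.mp3'], ['another-song.mp3']]
```

This is exactly the shape the manual mock above relies on: the test file imports `mockPlaySoundFile` and inspects it, no matter how many instances the code under test creates.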
### Cleaning up between tests To clear the record of calls to the mock constructor function and its methods, we call [`mockClear()`](mock-function-api#mockfnmockclear) in the `beforeEach()` function: ``` beforeEach(() => { SoundPlayer.mockClear(); mockPlaySoundFile.mockClear(); }); ``` Complete example ---------------- Here's a complete test file which uses the module factory parameter to `jest.mock`: ``` import SoundPlayer from './sound-player'; import SoundPlayerConsumer from './sound-player-consumer'; const mockPlaySoundFile = jest.fn(); jest.mock('./sound-player', () => { return jest.fn().mockImplementation(() => { return {playSoundFile: mockPlaySoundFile}; }); }); beforeEach(() => { SoundPlayer.mockClear(); mockPlaySoundFile.mockClear(); }); it('The consumer should be able to call new() on SoundPlayer', () => { const soundPlayerConsumer = new SoundPlayerConsumer(); // Ensure constructor created the object: expect(soundPlayerConsumer).toBeTruthy(); }); it('We can check if the consumer called the class constructor', () => { const soundPlayerConsumer = new SoundPlayerConsumer(); expect(SoundPlayer).toHaveBeenCalledTimes(1); }); it('We can check if the consumer called a method on the class instance', () => { const soundPlayerConsumer = new SoundPlayerConsumer(); const coolSoundFileName = 'song.mp3'; soundPlayerConsumer.playSomethingCool(); expect(mockPlaySoundFile.mock.calls[0][0]).toEqual(coolSoundFileName); }); ``` sound-player-consumer.test.js
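A final note on why any of this works: as stated at the top of this page, an ES6 class is a constructor function with some syntactic sugar, which is why a plain mock function can stand in for one. A quick plain-JavaScript check (a trimmed-down variant of `SoundPlayer`, no Jest needed) makes that concrete:

```javascript
// An ES6 class is just a constructor function with syntactic sugar.
class SoundPlayer {
  playSoundFile(fileName) {
    return `playing ${fileName}`;
  }
}

console.log(typeof SoundPlayer); // 'function'
// Methods live on the prototype, which is what jest.spyOn targets:
console.log(typeof SoundPlayer.prototype.playSoundFile); // 'function'
```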
jest Watch Plugins Watch Plugins ============= The Jest watch plugin system provides a way to hook into specific parts of Jest and to define watch mode menu prompts that execute code on key press. Combined, these features allow you to develop interactive experiences custom for your workflow. Watch Plugin Interface ---------------------- ``` class MyWatchPlugin { // Add hooks to Jest lifecycle events apply(jestHooks) {} // Get the prompt information for interactive plugins getUsageInfo(globalConfig) {} // Executed when the key from `getUsageInfo` is input run(globalConfig, updateConfigAndRun) {} } ``` Hooking into Jest ----------------- To connect your watch plugin to Jest, add its path under `watchPlugins` in your Jest configuration: ``` module.exports = { // ... watchPlugins: ['path/to/yourWatchPlugin'], }; ``` jest.config.js Custom watch plugins can add hooks to Jest events. These hooks can be added either with or without having an interactive key in the watch mode menu. ### `apply(jestHooks)` Jest hooks can be attached by implementing the `apply` method. This method receives a `jestHooks` argument that allows the plugin to hook into specific parts of the lifecycle of a test run. ``` class MyWatchPlugin { apply(jestHooks) {} } ``` Below are the hooks available in Jest. #### `jestHooks.shouldRunTestSuite(testSuiteInfo)` Returns a boolean (or `Promise<boolean>` for handling asynchronous operations) to specify if a test should be run or not. For example: ``` class MyWatchPlugin { apply(jestHooks) { jestHooks.shouldRunTestSuite(testSuiteInfo => { return testSuiteInfo.testPath.includes('my-keyword'); }); // or a promise jestHooks.shouldRunTestSuite(testSuiteInfo => { return Promise.resolve(testSuiteInfo.testPath.includes('my-keyword')); }); } } ``` #### `jestHooks.onTestRunComplete(results)` Gets called at the end of every test run. It has the test results as an argument. 
For example: ``` class MyWatchPlugin { apply(jestHooks) { jestHooks.onTestRunComplete(results => { this._hasSnapshotFailure = results.snapshot.failure; }); } } ``` #### `jestHooks.onFileChange({projects})` Gets called whenever there is a change in the file system. * `projects: Array<{config: ProjectConfig, testPaths: Array<string>}>`: Includes all the test paths that Jest is watching. For example: ``` class MyWatchPlugin { apply(jestHooks) { jestHooks.onFileChange(({projects}) => { this._projects = projects; }); } } ``` Watch Menu Integration ---------------------- Custom watch plugins can also add or override functionality to the watch menu by specifying a key/prompt pair in the `getUsageInfo` method and a `run` method for the execution of the key. ### `getUsageInfo(globalConfig)` To add a key to the watch menu, implement the `getUsageInfo` method, returning a key and the prompt: ``` class MyWatchPlugin { getUsageInfo(globalConfig) { return { key: 's', prompt: 'do something', }; } } ``` This will add a line in the watch mode menu *(`› Press s to do something.`)* ``` Watch Usage › Press p to filter by a filename regex pattern. › Press t to filter by a test name regex pattern. › Press q to quit watch mode. › Press s to do something. // <-- This is our plugin › Press Enter to trigger a test run. ``` **Note**: If the key for your plugin already exists as a default key, your plugin will override that key. ### `run(globalConfig, updateConfigAndRun)` To handle key press events from the key returned by `getUsageInfo`, you can implement the `run` method. This method returns a `Promise<boolean>` that can be resolved when the plugin wants to return control to Jest. The `boolean` specifies if Jest should rerun the tests after it gets the control back. * `globalConfig`: A representation of Jest's current global configuration * `updateConfigAndRun`: Allows you to trigger a test run while the interactive plugin is running. 
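Putting `getUsageInfo` and `run` together, a minimal interactive plugin might look like the following sketch. The key, prompt, and `testPathPattern` value are hypothetical choices for illustration, not part of any real plugin; note that `testPathPattern` is one of the configuration keys authorized for `updateConfigAndRun`.

```javascript
// Sketch of an interactive watch plugin (hypothetical key and prompt).
class FilterFooPlugin {
  getUsageInfo(globalConfig) {
    return {key: 'f', prompt: 'only run tests whose path contains "foo"'};
  }

  run(globalConfig, updateConfigAndRun) {
    // Trigger a run with a narrowed test path pattern, then resolve a
    // falsy value so Jest does not rerun the tests a second time.
    updateConfigAndRun({testPathPattern: 'foo'});
    return Promise.resolve();
  }
}

module.exports = FilterFooPlugin;
```

Registered under `watchPlugins`, this would add `› Press f to only run tests whose path contains "foo".` to the watch menu.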
``` class MyWatchPlugin { run(globalConfig, updateConfigAndRun) { // do something. } } ``` **Note**: If you do call `updateConfigAndRun`, your `run` method should not resolve to a truthy value, as that would trigger a double-run. #### Authorized configuration keys For stability and safety reasons, only part of the global configuration keys can be updated with `updateConfigAndRun`. The current white list is as follows: * [`bail`](configuration#bail-number--boolean) * [`changedSince`](cli#--changedsince) * [`collectCoverage`](configuration#collectcoverage-boolean) * [`collectCoverageFrom`](configuration#collectcoveragefrom-array) * [`coverageDirectory`](configuration#coveragedirectory-string) * [`coverageReporters`](configuration#coveragereporters-arraystring) * [`notify`](configuration#notify-boolean) * [`notifyMode`](configuration#notifymode-string) * [`onlyFailures`](configuration#onlyfailures-boolean) * [`reporters`](configuration#reporters-arraymodulename--modulename-options) * [`testNamePattern`](cli#--testnamepatternregex) * [`testPathPattern`](cli#--testpathpatternregex) * [`updateSnapshot`](cli#--updatesnapshot) * [`verbose`](configuration#verbose-boolean) Customization ------------- Plugins can be customized via your Jest configuration. ``` module.exports = { // ... watchPlugins: [ [ 'path/to/yourWatchPlugin', { key: 'k', // <- your custom key prompt: 'show a custom prompt', }, ], ], }; ``` jest.config.js Recommended config names: * `key`: Modifies the plugin key. * `prompt`: Allows user to customize the text in the plugin prompt. If the user provided a custom configuration, it will be passed as an argument to the plugin constructor. ``` class MyWatchPlugin { constructor({config}) {} } ``` Choosing a good key ------------------- Jest allows third-party plugins to override some of its built-in feature keys, but not all. 
Specifically, the following keys are **not overwritable** : * `c` (clears filter patterns) * `i` (updates non-matching snapshots interactively) * `q` (quits) * `u` (updates all non-matching snapshots) * `w` (displays watch mode usage / available actions) The following keys for built-in functionality **can be overwritten** : * `p` (test filename pattern) * `t` (test name pattern) Any key not used by built-in functionality can be claimed, as you would expect. Try to avoid using keys that are difficult to obtain on various keyboards (e.g. `é`, `€`), or not visible by default (e.g. many Mac keyboards do not have visual hints for characters such as `|`, `\`, `[`, etc.) ### When a conflict happens Should your plugin attempt to overwrite a reserved key, Jest will error out with a descriptive message, something like: > Watch plugin YourFaultyPlugin attempted to register key `q`, that is reserved internally for quitting watch mode. Please change the configuration key for this plugin. > > Third-party plugins are also forbidden to overwrite a key reserved already by another third-party plugin present earlier in the configured plugins list (`watchPlugins` array setting). When this happens, you’ll also get an error message that tries to help you fix that: > Watch plugins YourFaultyPlugin and TheirFaultyPlugin both attempted to register key `x`. Please change the key configuration for one of the conflicting plugins to avoid overlap. > > jest Timer Mocks Timer Mocks =========== The native timer functions (i.e., `setTimeout()`, `setInterval()`, `clearTimeout()`, `clearInterval()`) are less than ideal for a testing environment since they depend on real time to elapse. Jest can swap out timers with functions that allow you to control the passage of time. [Great Scott!](https://www.youtube.com/watch?v=QZoJ2Pt27BY) info Also see [Fake Timers API](jest-object#fake-timers) documentation. 
Enable Fake Timers ------------------ In the following example we enable fake timers by calling `jest.useFakeTimers()`. This is replacing the original implementation of `setTimeout()` and other timer functions. Timers can be restored to their normal behavior with `jest.useRealTimers()`. ``` function timerGame(callback) { console.log('Ready....go!'); setTimeout(() => { console.log("Time's up -- stop!"); callback && callback(); }, 1000); } module.exports = timerGame; ``` timerGame.js ``` jest.useFakeTimers(); jest.spyOn(global, 'setTimeout'); test('waits 1 second before ending the game', () => { const timerGame = require('../timerGame'); timerGame(); expect(setTimeout).toHaveBeenCalledTimes(1); expect(setTimeout).toHaveBeenLastCalledWith(expect.any(Function), 1000); }); ``` \_\_tests\_\_/timerGame-test.js Run All Timers -------------- Another test we might want to write for this module is one that asserts that the callback is called after 1 second. To do this, we're going to use Jest's timer control APIs to fast-forward time right in the middle of the test: ``` jest.useFakeTimers(); test('calls the callback after 1 second', () => { const timerGame = require('../timerGame'); const callback = jest.fn(); timerGame(callback); // At this point in time, the callback should not have been called yet expect(callback).not.toBeCalled(); // Fast-forward until all timers have been executed jest.runAllTimers(); // Now our callback should have been called! expect(callback).toBeCalled(); expect(callback).toHaveBeenCalledTimes(1); }); ``` Run Pending Timers ------------------ There are also scenarios where you might have a recursive timer – that is a timer that sets a new timer in its own callback. For these, running all the timers would be an endless loop, throwing the following error: "Aborting after running 100000 timers, assuming an infinite loop!" 
If that is your case, using `jest.runOnlyPendingTimers()` will solve the problem: ``` function infiniteTimerGame(callback) { console.log('Ready....go!'); setTimeout(() => { console.log("Time's up! 10 seconds before the next game starts..."); callback && callback(); // Schedule the next game in 10 seconds setTimeout(() => { infiniteTimerGame(callback); }, 10000); }, 1000); } module.exports = infiniteTimerGame; ``` infiniteTimerGame.js ``` jest.useFakeTimers(); jest.spyOn(global, 'setTimeout'); describe('infiniteTimerGame', () => { test('schedules a 10-second timer after 1 second', () => { const infiniteTimerGame = require('../infiniteTimerGame'); const callback = jest.fn(); infiniteTimerGame(callback); // At this point in time, there should have been a single call to // setTimeout to schedule the end of the game in 1 second. expect(setTimeout).toHaveBeenCalledTimes(1); expect(setTimeout).toHaveBeenLastCalledWith(expect.any(Function), 1000); // Fast forward and exhaust only currently pending timers // (but not any new timers that get created during that process) jest.runOnlyPendingTimers(); // At this point, our 1-second timer should have fired its callback expect(callback).toBeCalled(); // And it should have created a new timer to start the game over in // 10 seconds expect(setTimeout).toHaveBeenCalledTimes(2); expect(setTimeout).toHaveBeenLastCalledWith(expect.any(Function), 10000); }); }); ``` \_\_tests\_\_/infiniteTimerGame-test.js note For debugging or any other reason, you can change the limit of timers that will be run before throwing an error: ``` jest.useFakeTimers({timerLimit: 100}); ``` Advance Timers by Time ---------------------- Another possibility is to use `jest.advanceTimersByTime(msToRun)`. When this API is called, all timers are advanced by `msToRun` milliseconds. All pending "macro-tasks" that have been queued via `setTimeout()` or `setInterval()`, and would be executed during this time frame, will be executed. 
Additionally, if those macro-tasks schedule new macro-tasks that would be executed within the same time frame, those will be executed until there are no more macro-tasks remaining in the queue that should be run within `msToRun` milliseconds. ``` function timerGame(callback) { console.log('Ready....go!'); setTimeout(() => { console.log("Time's up -- stop!"); callback && callback(); }, 1000); } module.exports = timerGame; ``` timerGame.js ``` jest.useFakeTimers(); it('calls the callback after 1 second via advanceTimersByTime', () => { const timerGame = require('../timerGame'); const callback = jest.fn(); timerGame(callback); // At this point in time, the callback should not have been called yet expect(callback).not.toBeCalled(); // Fast-forward until all timers have been executed jest.advanceTimersByTime(1000); // Now our callback should have been called! expect(callback).toBeCalled(); expect(callback).toHaveBeenCalledTimes(1); }); ``` \_\_tests\_\_/timerGame-test.js Lastly, it may occasionally be useful in some tests to be able to clear all of the pending timers. For this, we have `jest.clearAllTimers()`. Selective Faking ---------------- Sometimes your code may need to keep the original implementation of one API or another. If that is the case, you can use the `doNotFake` option. For example, here is how you could provide a custom mock function for `performance.mark()` in the jsdom environment: ``` /** * @jest-environment jsdom */ const mockPerformanceMark = jest.fn(); window.performance.mark = mockPerformanceMark; test('allows mocking `performance.mark()`', () => { jest.useFakeTimers({doNotFake: ['performance']}); expect(window.performance.mark).toBe(mockPerformanceMark); }); ``` jest Mock Functions Mock Functions ============== Mock functions are also known as "spies", because they let you spy on the behavior of a function that is called indirectly by some other code, rather than only testing the output. You can create a mock function with `jest.fn()`. 
If no implementation is given, the mock function will return `undefined` when invoked. info The TypeScript examples from this page will only work as documented if you import `jest` from `'@jest/globals'`: ``` import {jest} from '@jest/globals'; ``` Methods ------- * [Reference](#reference) + [`mockFn.getMockName()`](#mockfngetmockname) + [`mockFn.mock.calls`](#mockfnmockcalls) + [`mockFn.mock.results`](#mockfnmockresults) + [`mockFn.mock.instances`](#mockfnmockinstances) + [`mockFn.mock.contexts`](#mockfnmockcontexts) + [`mockFn.mock.lastCall`](#mockfnmocklastcall) + [`mockFn.mockClear()`](#mockfnmockclear) + [`mockFn.mockReset()`](#mockfnmockreset) + [`mockFn.mockRestore()`](#mockfnmockrestore) + [`mockFn.mockImplementation(fn)`](#mockfnmockimplementationfn) + [`mockFn.mockImplementationOnce(fn)`](#mockfnmockimplementationoncefn) + [`mockFn.mockName(name)`](#mockfnmocknamename) + [`mockFn.mockReturnThis()`](#mockfnmockreturnthis) + [`mockFn.mockReturnValue(value)`](#mockfnmockreturnvaluevalue) + [`mockFn.mockReturnValueOnce(value)`](#mockfnmockreturnvalueoncevalue) + [`mockFn.mockResolvedValue(value)`](#mockfnmockresolvedvaluevalue) + [`mockFn.mockResolvedValueOnce(value)`](#mockfnmockresolvedvalueoncevalue) + [`mockFn.mockRejectedValue(value)`](#mockfnmockrejectedvaluevalue) + [`mockFn.mockRejectedValueOnce(value)`](#mockfnmockrejectedvalueoncevalue) * [TypeScript Usage](#typescript-usage) + [`jest.fn(implementation?)`](#jestfnimplementation) + [`jest.Mocked<Source>`](#jestmockedsource) + [`jest.mocked(source, options?)`](#jestmockedsource-options) Reference --------- ### `mockFn.getMockName()` Returns the mock name string set by calling `mockFn.mockName(value)`. ### `mockFn.mock.calls` An array containing the call arguments of all calls that have been made to this mock function. Each item in the array is an array of arguments that were passed during the call. 
For example: A mock function `f` that has been called twice, with the arguments `f('arg1', 'arg2')`, and then with the arguments `f('arg3', 'arg4')`, would have a `mock.calls` array that looks like this: ``` [ ['arg1', 'arg2'], ['arg3', 'arg4'], ]; ``` ### `mockFn.mock.results` An array containing the results of all calls that have been made to this mock function. Each entry in this array is an object containing a `type` property, and a `value` property. `type` will be one of the following: * `'return'` - Indicates that the call completed by returning normally. * `'throw'` - Indicates that the call completed by throwing a value. * `'incomplete'` - Indicates that the call has not yet completed. This occurs if you test the result from within the mock function itself, or from within a function that was called by the mock. The `value` property contains the value that was thrown or returned. `value` is undefined when `type === 'incomplete'`. For example: A mock function `f` that has been called three times, returning `'result1'`, throwing an error, and then returning `'result2'`, would have a `mock.results` array that looks like this: ``` [ { type: 'return', value: 'result1', }, { type: 'throw', value: { /* Error instance */ }, }, { type: 'return', value: 'result2', }, ]; ``` ### `mockFn.mock.instances` An array that contains all the object instances that have been instantiated from this mock function using `new`. For example: A mock function that has been instantiated twice would have the following `mock.instances` array: ``` const mockFn = jest.fn(); const a = new mockFn(); const b = new mockFn(); mockFn.mock.instances[0] === a; // true mockFn.mock.instances[1] === b; // true ``` ### `mockFn.mock.contexts` An array that contains the contexts for all calls of the mock function. A context is the `this` value that a function receives when called. The context can be set using `Function.prototype.bind`, `Function.prototype.call` or `Function.prototype.apply`. 
For example: ``` const mockFn = jest.fn(); const boundMockFn = mockFn.bind(thisContext0); boundMockFn('a', 'b'); mockFn.call(thisContext1, 'a', 'b'); mockFn.apply(thisContext2, ['a', 'b']); mockFn.mock.contexts[0] === thisContext0; // true mockFn.mock.contexts[1] === thisContext1; // true mockFn.mock.contexts[2] === thisContext2; // true ``` ### `mockFn.mock.lastCall` An array containing the call arguments of the last call that was made to this mock function. If the function was not called, it will return `undefined`. For example: A mock function `f` that has been called twice, with the arguments `f('arg1', 'arg2')`, and then with the arguments `f('arg3', 'arg4')`, would have a `mock.lastCall` array that looks like this: ``` ['arg3', 'arg4']; ``` ### `mockFn.mockClear()` Clears all information stored in the [`mockFn.mock.calls`](#mockfnmockcalls), [`mockFn.mock.instances`](#mockfnmockinstances), [`mockFn.mock.contexts`](#mockfnmockcontexts) and [`mockFn.mock.results`](#mockfnmockresults) arrays. Often this is useful when you want to clean up a mock's usage data between two assertions. Beware that `mockFn.mockClear()` will replace `mockFn.mock`, not just reset the values of its properties! You should, therefore, avoid assigning `mockFn.mock` to other variables, temporary or not, to make sure you don't access stale data. The [`clearMocks`](../configuration#clearmocks-boolean) configuration option is available to clear mocks automatically before each test. ### `mockFn.mockReset()` Does everything that [`mockFn.mockClear()`](#mockfnmockclear) does, and also removes any mocked return values or implementations. This is useful when you want to completely reset a *mock* back to its initial state. (Note that resetting a *spy* will result in a function with no return value). The [`resetMocks`](../configuration#resetmocks-boolean) configuration option is available to reset mocks automatically before each test. 
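The difference between clearing and resetting can be illustrated with a small plain-JavaScript stand-in (not Jest's implementation — `createRecordingMock` is a hypothetical helper invented here): clearing drops the recorded usage data only, while resetting also drops the configured return value.

```javascript
// A plain-JavaScript sketch of mockClear() vs. mockReset()
// (NOT Jest's implementation). clear() drops recorded usage data;
// reset() additionally drops the configured implementation.
function createRecordingMock() {
  let impl = () => undefined;
  const fn = (...args) => {
    fn.mock.calls.push(args);
    return impl();
  };
  fn.mock = {calls: []};
  fn.mockReturnValue = value => { impl = () => value; return fn; };
  // Replaces fn.mock wholesale, mirroring the caveat above about
  // holding references to the old mock object.
  fn.mockClear = () => { fn.mock = {calls: []}; };
  fn.mockReset = () => { fn.mockClear(); impl = () => undefined; };
  return fn;
}

const mockFn = createRecordingMock().mockReturnValue(42);
mockFn();
console.log(mockFn.mock.calls.length); // 1

mockFn.mockClear();                    // usage data gone...
console.log(mockFn.mock.calls.length); // 0
console.log(mockFn());                 // 42 – implementation survives

mockFn.mockReset();                    // usage data AND return value gone
console.log(mockFn());                 // undefined
```

In real Jest the same contrast holds: after `mockClear()` the mock still returns its configured value, while after `mockReset()` it returns `undefined` again.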
### `mockFn.mockRestore()` Does everything that [`mockFn.mockReset()`](#mockfnmockreset) does, and also restores the original (non-mocked) implementation. This is useful when you want to mock functions in certain test cases and restore the original implementation in others. Beware that `mockFn.mockRestore()` only works when the mock was created with `jest.spyOn()`. Thus you have to take care of restoration yourself when manually assigning `jest.fn()`. The [`restoreMocks`](../configuration#restoremocks-boolean) configuration option is available to restore mocks automatically before each test. ### `mockFn.mockImplementation(fn)` Accepts a function that should be used as the implementation of the mock. The mock itself will still record all calls that go into and instances that come from itself – the only difference is that the implementation will also be executed when the mock is called. tip `jest.fn(implementation)` is a shorthand for `jest.fn().mockImplementation(implementation)`. * JavaScript * TypeScript ``` const mockFn = jest.fn(scalar => 42 + scalar); mockFn(0); // 42 mockFn(1); // 43 mockFn.mockImplementation(scalar => 36 + scalar); mockFn(2); // 38 mockFn(3); // 39 ``` ``` const mockFn = jest.fn((scalar: number) => 42 + scalar); mockFn(0); // 42 mockFn(1); // 43 mockFn.mockImplementation(scalar => 36 + scalar); mockFn(2); // 38 mockFn(3); // 39 ``` `.mockImplementation()` can also be used to mock class constructors: * JavaScript * TypeScript ``` module.exports = class SomeClass { method(a, b) {} }; ``` SomeClass.js ``` const SomeClass = require('./SomeClass'); jest.mock('./SomeClass'); // this happens automatically with automocking const mockMethod = jest.fn(); SomeClass.mockImplementation(() => { return { method: mockMethod, }; }); const some = new SomeClass(); some.method('a', 'b'); console.log('Calls to method: ', mockMethod.mock.calls); ``` SomeClass.test.js ``` export class SomeClass { method(a: string, b: string): void {} } ``` SomeClass.ts ``` import 
{SomeClass} from './SomeClass'; jest.mock('./SomeClass'); // this happens automatically with automocking const mockMethod = jest.fn<(a: string, b: string) => void>(); SomeClass.mockImplementation(() => { return { method: mockMethod, }; }); const some = new SomeClass(); some.method('a', 'b'); console.log('Calls to method: ', mockMethod.mock.calls); ``` SomeClass.test.ts ### `mockFn.mockImplementationOnce(fn)` Accepts a function that will be used as an implementation of the mock for one call to the mocked function. Can be chained so that multiple function calls produce different results. * JavaScript * TypeScript ``` const mockFn = jest .fn() .mockImplementationOnce(cb => cb(null, true)) .mockImplementationOnce(cb => cb(null, false)); mockFn((err, val) => console.log(val)); // true mockFn((err, val) => console.log(val)); // false ``` ``` const mockFn = jest .fn<(cb: (a: null, b: boolean) => void) => void>() .mockImplementationOnce(cb => cb(null, true)) .mockImplementationOnce(cb => cb(null, false)); mockFn((err, val) => console.log(val)); // true mockFn((err, val) => console.log(val)); // false ``` When the mocked function runs out of implementations defined with `.mockImplementationOnce()`, it will execute the default implementation set with `jest.fn(() => defaultValue)` or `.mockImplementation(() => defaultValue)` if they were called: ``` const mockFn = jest .fn(() => 'default') .mockImplementationOnce(() => 'first call') .mockImplementationOnce(() => 'second call'); mockFn(); // 'first call' mockFn(); // 'second call' mockFn(); // 'default' mockFn(); // 'default' ``` ### `mockFn.mockName(name)` Accepts a string to use in test result output in place of `'jest.fn()'` to indicate which mock function is being referenced. 
For example: ``` const mockFn = jest.fn().mockName('mockedFunction'); // mockFn(); expect(mockFn).toHaveBeenCalled(); ``` Will result in this error: ``` expect(mockedFunction).toHaveBeenCalled() Expected mock function "mockedFunction" to have been called, but it was not called. ``` ### `mockFn.mockReturnThis()` Syntactic sugar function for: ``` jest.fn(function () { return this; }); ``` ### `mockFn.mockReturnValue(value)` Accepts a value that will be returned whenever the mock function is called. * JavaScript * TypeScript ``` const mock = jest.fn(); mock.mockReturnValue(42); mock(); // 42 mock.mockReturnValue(43); mock(); // 43 ``` ``` const mock = jest.fn<() => number>(); mock.mockReturnValue(42); mock(); // 42 mock.mockReturnValue(43); mock(); // 43 ``` ### `mockFn.mockReturnValueOnce(value)` Accepts a value that will be returned for one call to the mock function. Can be chained so that successive calls to the mock function return different values. When there are no more `mockReturnValueOnce` values to use, calls will return a value specified by `mockReturnValue`. 
* JavaScript * TypeScript ``` const mockFn = jest .fn() .mockReturnValue('default') .mockReturnValueOnce('first call') .mockReturnValueOnce('second call'); mockFn(); // 'first call' mockFn(); // 'second call' mockFn(); // 'default' mockFn(); // 'default' ``` ``` const mockFn = jest .fn<() => string>() .mockReturnValue('default') .mockReturnValueOnce('first call') .mockReturnValueOnce('second call'); mockFn(); // 'first call' mockFn(); // 'second call' mockFn(); // 'default' mockFn(); // 'default' ``` ### `mockFn.mockResolvedValue(value)` Syntactic sugar function for: ``` jest.fn().mockImplementation(() => Promise.resolve(value)); ``` Useful to mock async functions in async tests: * JavaScript * TypeScript ``` test('async test', async () => { const asyncMock = jest.fn().mockResolvedValue(43); await asyncMock(); // 43 }); ``` ``` test('async test', async () => { const asyncMock = jest.fn<() => Promise<number>>().mockResolvedValue(43); await asyncMock(); // 43 }); ``` ### `mockFn.mockResolvedValueOnce(value)` Syntactic sugar function for: ``` jest.fn().mockImplementationOnce(() => Promise.resolve(value)); ``` Useful to resolve different values over multiple async calls: * JavaScript * TypeScript ``` test('async test', async () => { const asyncMock = jest .fn() .mockResolvedValue('default') .mockResolvedValueOnce('first call') .mockResolvedValueOnce('second call'); await asyncMock(); // 'first call' await asyncMock(); // 'second call' await asyncMock(); // 'default' await asyncMock(); // 'default' }); ``` ``` test('async test', async () => { const asyncMock = jest .fn<() => Promise<string>>() .mockResolvedValue('default') .mockResolvedValueOnce('first call') .mockResolvedValueOnce('second call'); await asyncMock(); // 'first call' await asyncMock(); // 'second call' await asyncMock(); // 'default' await asyncMock(); // 'default' }); ``` ### `mockFn.mockRejectedValue(value)` Syntactic sugar function for: ``` jest.fn().mockImplementation(() => Promise.reject(value)); ``` 
Useful to create async mock functions that will always reject: * JavaScript * TypeScript ``` test('async test', async () => { const asyncMock = jest .fn() .mockRejectedValue(new Error('Async error message')); await asyncMock(); // throws 'Async error message' }); ``` ``` test('async test', async () => { const asyncMock = jest .fn<() => Promise<never>>() .mockRejectedValue(new Error('Async error message')); await asyncMock(); // throws 'Async error message' }); ``` ### `mockFn.mockRejectedValueOnce(value)` Syntactic sugar function for: ``` jest.fn().mockImplementationOnce(() => Promise.reject(value)); ``` Useful together with `.mockResolvedValueOnce()` or to reject with different exceptions over multiple async calls: * JavaScript * TypeScript ``` test('async test', async () => { const asyncMock = jest .fn() .mockResolvedValueOnce('first call') .mockRejectedValueOnce(new Error('Async error message')); await asyncMock(); // 'first call' await asyncMock(); // throws 'Async error message' }); ``` ``` test('async test', async () => { const asyncMock = jest .fn<() => Promise<string>>() .mockResolvedValueOnce('first call') .mockRejectedValueOnce(new Error('Async error message')); await asyncMock(); // 'first call' await asyncMock(); // throws 'Async error message' }); ``` TypeScript Usage ---------------- tip Please consult the [Getting Started](../getting-started#using-typescript) guide for details on how to set up Jest with TypeScript. ### `jest.fn(implementation?)` Correct mock typings will be inferred if implementation is passed to [`jest.fn()`](../jest-object#jestfnimplementation). There are many use cases where the implementation is omitted. To ensure type safety you may pass a generic type argument (also see the examples above for more reference): ``` import {expect, jest, test} from '@jest/globals'; import type add from './add'; import calculate from './calc'; test('calculate calls add', () => { // Create a new mock that can be used in place of `add`. 
const mockAdd = jest.fn<typeof add>(); // `.mockImplementation()` now can infer that `a` and `b` are `number` // and that the returned value is a `number`. mockAdd.mockImplementation((a, b) => { // Yes, this mock is still adding two numbers but imagine this // was a complex function we are mocking. return a + b; }); // `mockAdd` is properly typed and therefore accepted by anything // requiring `add`. calculate(mockAdd, 1, 2); expect(mockAdd).toBeCalledTimes(1); expect(mockAdd).toBeCalledWith(1, 2); }); ``` ### `jest.Mocked<Source>` The `jest.Mocked<Source>` utility type returns the `Source` type wrapped with type definitions of Jest mock function. ``` import {expect, jest, test} from '@jest/globals'; import type {fetch} from 'node-fetch'; jest.mock('node-fetch'); let mockedFetch: jest.Mocked<typeof fetch>; afterEach(() => { mockedFetch.mockClear(); }); test('makes correct call', () => { mockedFetch = getMockedFetch(); // ... }); test('returns correct data', () => { mockedFetch = getMockedFetch(); // ... }); ``` Types of classes, functions or objects can be passed as type argument to `jest.Mocked<Source>`. If you prefer to constrain the input type, use: `jest.MockedClass<Source>`, `jest.MockedFunction<Source>` or `jest.MockedObject<Source>`. ### `jest.mocked(source, options?)` The `mocked()` helper method wraps types of the `source` object and its deep nested members with type definitions of Jest mock function. You can pass `{shallow: true}` as the `options` argument to disable the deeply mocked behavior. Returns the `source` object. 
``` export const song = { one: { more: { time: (t: number) => { return t; }, }, }, }; ``` song.ts ``` import {expect, jest, test} from '@jest/globals'; import {song} from './song'; jest.mock('./song'); jest.spyOn(console, 'log'); const mockedSong = jest.mocked(song); // or through `jest.Mocked<Source>` // const mockedSong = song as jest.Mocked<typeof song>; test('deep method is typed correctly', () => { mockedSong.one.more.time.mockReturnValue(12); expect(mockedSong.one.more.time(10)).toBe(12); expect(mockedSong.one.more.time.mock.calls).toHaveLength(1); }); test('direct usage', () => { jest.mocked(console.log).mockImplementation(() => { return; }); console.log('one more time'); expect(jest.mocked(console.log).mock.calls).toHaveLength(1); }); ``` song.test.ts
redis Redis Redis ===== ACL CAT ======== Lists the ACL categories, or the commands inside a category. [Read more](acl-cat/index) ACL DELUSER ============ Deletes ACL users, and terminates their connections. [Read more](acl-deluser/index) ACL DRYRUN =========== Simulates the execution of a command by a user, without executing the command. [Read more](acl-dryrun/index) ACL GENPASS ============ Generates a pseudorandom, secure password that can be used to identify ACL users. [Read more](acl-genpass/index) ACL GETUSER ============ Lists the ACL rules of a user. [Read more](acl-getuser/index) ACL LIST ========= Dumps the effective rules in ACL file format. [Read more](acl-list/index) ACL LOAD ========= Reloads the rules from the configured ACL file. [Read more](acl-load/index) ACL LOG ======== Lists recent security events generated due to ACL rules. [Read more](acl-log/index) ACL SAVE ========= Saves the effective ACL rules in the configured ACL file. [Read more](acl-save/index) ACL SETUSER ============ Creates and modifies an ACL user and its rules. [Read more](acl-setuser/index) ACL USERS ========== Lists all ACL users. [Read more](acl-users/index) ACL WHOAMI =========== Returns the authenticated username of the current connection. [Read more](acl-whoami/index) APPEND ======= Appends a string to the value of a key. Creates the key if it doesn't exist. [Read more](append/index) ASKING ======= Signals that a cluster client is following an -ASK redirect. [Read more](asking/index) AUTH ===== Authenticates the connection. [Read more](auth/index) BF.ADD ======= Adds an item to a Bloom Filter [Read more](bf.add/index) BF.CARD ======== Returns the cardinality of a Bloom filter [Read more](bf.card/index) BF.EXISTS ========== Checks whether an item exists in a Bloom Filter [Read more](bf.exists/index) BF.INFO ======== Returns information about a Bloom Filter [Read more](bf.info/index) BF.INSERT ========== Adds one or more items to a Bloom Filter. 
A filter will be created if it does not exist [Read more](bf.insert/index) BF.LOADCHUNK ============= Restores a filter previously saved using SCANDUMP [Read more](bf.loadchunk/index) BF.MADD ======== Adds one or more items to a Bloom Filter. A filter will be created if it does not exist [Read more](bf.madd/index) BF.MEXISTS =========== Checks whether one or more items exist in a Bloom Filter [Read more](bf.mexists/index) BF.RESERVE =========== Creates a new Bloom Filter [Read more](bf.reserve/index) BF.SCANDUMP ============ Begins an incremental save of the bloom filter [Read more](bf.scandump/index) BGREWRITEAOF ============= Asynchronously rewrites the append-only file to disk. [Read more](bgrewriteaof/index) BGSAVE ======= Asynchronously saves the database(s) to disk. [Read more](bgsave/index) BITCOUNT ========= Counts the number of set bits (population counting) in a string. [Read more](bitcount/index) BITFIELD ========= Performs arbitrary bitfield integer operations on strings. [Read more](bitfield/index) BITFIELD\_RO ============= Performs arbitrary read-only bitfield integer operations on strings. [Read more](bitfield_ro/index) BITOP ====== Performs bitwise operations on multiple strings, and stores the result. [Read more](bitop/index) BITPOS ======= Finds the first set (1) or clear (0) bit in a string. [Read more](bitpos/index) BLMOVE ======= Pops an element from a list, pushes it to another list and returns it. Blocks until an element is available otherwise. Deletes the list if the last element was moved. [Read more](blmove/index) BLMPOP ======= Pops the first element from one of multiple lists. Blocks until an element is available otherwise. Deletes the list if the last element was popped. [Read more](blmpop/index) BLPOP ====== Removes and returns the first element in a list. Blocks until an element is available otherwise. Deletes the list if the last element was popped. 
[Read more](blpop/index) BRPOP ====== Removes and returns the last element in a list. Blocks until an element is available otherwise. Deletes the list if the last element was popped. [Read more](brpop/index) BRPOPLPUSH =========== Pops an element from a list, pushes it to another list and returns it. Blocks until an element is available otherwise. Deletes the list if the last element was popped. [Read more](brpoplpush/index) BZMPOP ======= Removes and returns a member by score from one or more sorted sets. Blocks until a member is available otherwise. Deletes the sorted set if the last element was popped. [Read more](bzmpop/index) BZPOPMAX ========= Removes and returns the member with the highest score from one or more sorted sets. Blocks until a member is available otherwise. Deletes the sorted set if the last element was popped. [Read more](bzpopmax/index) BZPOPMIN ========= Removes and returns the member with the lowest score from one or more sorted sets. Blocks until a member is available otherwise. Deletes the sorted set if the last element was popped. [Read more](bzpopmin/index) CF.ADD ======= Adds an item to a Cuckoo Filter [Read more](cf.add/index) CF.ADDNX ========= Adds an item to a Cuckoo Filter if the item did not exist previously. [Read more](cf.addnx/index) CF.COUNT ========= Returns the number of times an item might be in a Cuckoo Filter [Read more](cf.count/index) CF.DEL ======= Deletes an item from a Cuckoo Filter [Read more](cf.del/index) CF.EXISTS ========== Checks whether one or more items exist in a Cuckoo Filter [Read more](cf.exists/index) CF.INFO ======== Returns information about a Cuckoo Filter [Read more](cf.info/index) CF.INSERT ========== Adds one or more items to a Cuckoo Filter. A filter will be created if it does not exist [Read more](cf.insert/index) CF.INSERTNX ============ Adds one or more items to a Cuckoo Filter if the items did not exist previously. 
A filter will be created if it does not exist [Read more](cf.insertnx/index) CF.LOADCHUNK ============= Restores a filter previously saved using SCANDUMP [Read more](cf.loadchunk/index) CF.MEXISTS =========== Checks whether one or more items exist in a Cuckoo Filter [Read more](cf.mexists/index) CF.RESERVE =========== Creates a new Cuckoo Filter [Read more](cf.reserve/index) CF.SCANDUMP ============ Begins an incremental save of the cuckoo filter [Read more](cf.scandump/index) CLIENT CACHING =============== Instructs the server whether to track the keys in the next request. [Read more](client-caching/index) CLIENT GETNAME =============== Returns the name of the connection. [Read more](client-getname/index) CLIENT GETREDIR ================ Returns the client ID to which the connection's tracking notifications are redirected. [Read more](client-getredir/index) CLIENT ID ========== Returns the unique client ID of the connection. [Read more](client-id/index) CLIENT INFO ============ Returns information about the connection. [Read more](client-info/index) CLIENT KILL ============ Terminates open connections. [Read more](client-kill/index) CLIENT LIST ============ Lists open connections. [Read more](client-list/index) CLIENT NO-EVICT ================ Sets the client eviction mode of the connection. [Read more](client-no-evict/index) CLIENT NO-TOUCH ================ Controls whether commands sent by the client affect the LRU/LFU of accessed keys. [Read more](client-no-touch/index) CLIENT PAUSE ============= Suspends commands processing. [Read more](client-pause/index) CLIENT REPLY ============= Instructs the server whether to reply to commands. [Read more](client-reply/index) CLIENT SETINFO =============== Sets information specific to the client or connection. [Read more](client-setinfo/index) CLIENT SETNAME =============== Sets the connection name. 
[Read more](client-setname/index) CLIENT TRACKING ================ Controls server-assisted client-side caching for the connection. [Read more](client-tracking/index) CLIENT TRACKINGINFO ==================== Returns information about server-assisted client-side caching for the connection. [Read more](client-trackinginfo/index) CLIENT UNBLOCK =============== Unblocks a client blocked by a blocking command from a different connection. [Read more](client-unblock/index) CLIENT UNPAUSE =============== Resumes processing commands from paused clients. [Read more](client-unpause/index) CLUSTER ADDSLOTS ================= Assigns new hash slots to a node. [Read more](cluster-addslots/index) CLUSTER ADDSLOTSRANGE ====================== Assigns new hash slot ranges to a node. [Read more](cluster-addslotsrange/index) CLUSTER BUMPEPOCH ================== Advances the cluster config epoch. [Read more](cluster-bumpepoch/index) CLUSTER COUNT-FAILURE-REPORTS ============================== Returns the number of active failure reports for a node. [Read more](cluster-count-failure-reports/index) CLUSTER COUNTKEYSINSLOT ======================== Returns the number of keys in a hash slot. [Read more](cluster-countkeysinslot/index) CLUSTER DELSLOTS ================= Sets hash slots as unbound for a node. [Read more](cluster-delslots/index) CLUSTER DELSLOTSRANGE ====================== Sets hash slot ranges as unbound for a node. [Read more](cluster-delslotsrange/index) CLUSTER FAILOVER ================= Forces a replica to perform a manual failover of its master. [Read more](cluster-failover/index) CLUSTER FLUSHSLOTS =================== Deletes all slots information from a node. [Read more](cluster-flushslots/index) CLUSTER FORGET =============== Removes a node from the nodes table. [Read more](cluster-forget/index) CLUSTER GETKEYSINSLOT ====================== Returns the key names in a hash slot. 
[Read more](cluster-getkeysinslot/index) CLUSTER INFO ============= Returns information about the state of a node. [Read more](cluster-info/index) CLUSTER KEYSLOT ================ Returns the hash slot for a key. [Read more](cluster-keyslot/index) CLUSTER LINKS ============== Returns a list of all TCP links to and from peer nodes. [Read more](cluster-links/index) CLUSTER MEET ============= Forces a node to handshake with another node. [Read more](cluster-meet/index) CLUSTER MYID ============= Returns the ID of a node. [Read more](cluster-myid/index) CLUSTER MYSHARDID ================== Returns the shard ID of a node. [Read more](cluster-myshardid/index) CLUSTER NODES ============== Returns the cluster configuration for a node. [Read more](cluster-nodes/index) CLUSTER REPLICAS ================= Lists the replica nodes of a master node. [Read more](cluster-replicas/index) CLUSTER REPLICATE ================== Configures a node as a replica of a master node. [Read more](cluster-replicate/index) CLUSTER RESET ============== Resets a node. [Read more](cluster-reset/index) CLUSTER SAVECONFIG =================== Forces a node to save the cluster configuration to disk. [Read more](cluster-saveconfig/index) CLUSTER SET-CONFIG-EPOCH ========================= Sets the configuration epoch for a new node. [Read more](cluster-set-config-epoch/index) CLUSTER SETSLOT ================ Binds a hash slot to a node. [Read more](cluster-setslot/index) CLUSTER SHARDS =============== Returns the mapping of cluster slots to shards. [Read more](cluster-shards/index) CLUSTER SLAVES =============== Lists the replica nodes of a master node. [Read more](cluster-slaves/index) CLUSTER SLOTS ============== Returns the mapping of cluster slots to nodes. 
[Read more](cluster-slots/index) CMS.INCRBY =========== Increases the count of one or more items by increment [Read more](cms.incrby/index) CMS.INFO ========= Returns information about a sketch [Read more](cms.info/index) CMS.INITBYDIM ============== Initializes a Count-Min Sketch to dimensions specified by the user [Read more](cms.initbydim/index) CMS.INITBYPROB =============== Initializes a Count-Min Sketch to accommodate requested tolerances. [Read more](cms.initbyprob/index) CMS.MERGE ========== Merges several sketches into one sketch [Read more](cms.merge/index) CMS.QUERY ========== Returns the count for one or more items in a sketch [Read more](cms.query/index) COMMAND ======== Returns detailed information about all commands. [Read more](command/index) COMMAND COUNT ============== Returns a count of commands. [Read more](command-count/index) COMMAND DOCS ============= Returns documentary information about a command. [Read more](command-docs/index) COMMAND GETKEYS ================ Extracts the key names from an arbitrary command. [Read more](command-getkeys/index) COMMAND GETKEYSANDFLAGS ======================== Extracts the key names and access flags for an arbitrary command. [Read more](command-getkeysandflags/index) COMMAND INFO ============= Returns information about one, multiple or all commands. [Read more](command-info/index) COMMAND LIST ============= Returns a list of command names. [Read more](command-list/index) CONFIG GET =========== Returns the effective values of configuration parameters. [Read more](config-get/index) CONFIG RESETSTAT ================= Resets the server's statistics. [Read more](config-resetstat/index) CONFIG REWRITE =============== Persists the effective configuration to file. [Read more](config-rewrite/index) CONFIG SET =========== Sets configuration parameters in-flight. [Read more](config-set/index) COPY ===== Copies the value of a key to a new key. 
[Read more](copy/index) DBSIZE ======= Returns the number of keys in the database. [Read more](dbsize/index) DECR ===== Decrements the integer value of a key by one. Uses 0 as initial value if the key doesn't exist. [Read more](decr/index) DECRBY ======= Decrements a number from the integer value of a key. Uses 0 as initial value if the key doesn't exist. [Read more](decrby/index) DEL ==== Deletes one or more keys. [Read more](del/index) DISCARD ======== Discards a transaction. [Read more](discard/index) DUMP ===== Returns a serialized representation of the value stored at a key. [Read more](dump/index) ECHO ===== Returns the given string. [Read more](echo/index) EVAL ===== Executes a server-side Lua script. [Read more](eval/index) EVAL\_RO ========= Executes a read-only server-side Lua script. [Read more](eval_ro/index) EVALSHA ======== Executes a server-side Lua script by SHA1 digest. [Read more](evalsha/index) EVALSHA\_RO ============ Executes a read-only server-side Lua script by SHA1 digest. [Read more](evalsha_ro/index) EXEC ===== Executes all commands in a transaction. [Read more](exec/index) EXISTS ======= Determines whether one or more keys exist. [Read more](exists/index) EXPIRE ======= Sets the expiration time of a key in seconds. [Read more](expire/index) EXPIREAT ========= Sets the expiration time of a key to a Unix timestamp. [Read more](expireat/index) EXPIRETIME =========== Returns the expiration time of a key as a Unix timestamp. [Read more](expiretime/index) FAILOVER ========= Starts a coordinated failover from a server to one of its replicas. [Read more](failover/index) FCALL ====== Invokes a function. [Read more](fcall/index) FCALL\_RO ========== Invokes a read-only function. [Read more](fcall_ro/index) FLUSHALL ========= Removes all keys from all databases. [Read more](flushall/index) FLUSHDB ======== Removes all keys from the current database. 
[Read more](flushdb/index) FT.\_LIST ========== Returns a list of all existing indexes [Read more](ft._list/index) FT.AGGREGATE ============= Runs a search query on an index and performs aggregate transformations on the results [Read more](ft.aggregate/index) FT.ALIASADD ============ Adds an alias to the index [Read more](ft.aliasadd/index) FT.ALIASDEL ============ Deletes an alias from the index [Read more](ft.aliasdel/index) FT.ALIASUPDATE =============== Adds or updates an alias to the index [Read more](ft.aliasupdate/index) FT.ALTER ========= Adds a new field to the index [Read more](ft.alter/index) FT.CONFIG GET ============== Retrieves runtime configuration options [Read more](ft.config-get/index) FT.CONFIG SET ============== Sets runtime configuration options [Read more](ft.config-set/index) FT.CREATE ========== Creates an index with the given spec [Read more](ft.create/index) FT.CURSOR DEL ============== Deletes a cursor [Read more](ft.cursor-del/index) FT.CURSOR READ =============== Reads from a cursor [Read more](ft.cursor-read/index) FT.DICTADD =========== Adds terms to a dictionary [Read more](ft.dictadd/index) FT.DICTDEL =========== Deletes terms from a dictionary [Read more](ft.dictdel/index) FT.DICTDUMP ============ Dumps all terms in the given dictionary [Read more](ft.dictdump/index) FT.DROPINDEX ============= Deletes the index [Read more](ft.dropindex/index) FT.EXPLAIN =========== Returns the execution plan for a complex query [Read more](ft.explain/index) FT.EXPLAINCLI ============== Returns the execution plan for a complex query [Read more](ft.explaincli/index) FT.INFO ======== Returns information and statistics on the index [Read more](ft.info/index) FT.PROFILE =========== Performs a `FT.SEARCH` or `FT.AGGREGATE` command and collects performance information [Read more](ft.profile/index) FT.SEARCH ========== Searches the index with a textual query, returning either documents or just ids [Read more](ft.search/index) FT.SPELLCHECK ============== 
Performs spelling correction on a query, returning suggestions for misspelled terms [Read more](ft.spellcheck/index) FT.SUGADD ========== Adds a suggestion string to an auto-complete suggestion dictionary [Read more](ft.sugadd/index) FT.SUGDEL ========== Deletes a string from a suggestion index [Read more](ft.sugdel/index) FT.SUGGET ========== Gets completion suggestions for a prefix [Read more](ft.sugget/index) FT.SUGLEN ========== Gets the size of an auto-complete suggestion dictionary [Read more](ft.suglen/index) FT.SYNDUMP =========== Dumps the contents of a synonym group [Read more](ft.syndump/index) FT.SYNUPDATE ============= Creates or updates a synonym group with additional terms [Read more](ft.synupdate/index) FT.TAGVALS =========== Returns the distinct tags indexed in a Tag field [Read more](ft.tagvals/index) FUNCTION DELETE ================ Deletes a library and its functions. [Read more](function-delete/index) FUNCTION DUMP ============== Dumps all libraries into a serialized binary payload. [Read more](function-dump/index) FUNCTION FLUSH =============== Deletes all libraries and functions. [Read more](function-flush/index) FUNCTION KILL ============== Terminates a function during execution. [Read more](function-kill/index) FUNCTION LIST ============== Returns information about all libraries. [Read more](function-list/index) FUNCTION LOAD ============== Creates a library. [Read more](function-load/index) FUNCTION RESTORE ================= Restores all libraries from a payload. [Read more](function-restore/index) FUNCTION STATS =============== Returns information about a function during execution. [Read more](function-stats/index) GEOADD ======= Adds one or more members to a geospatial index. The key is created if it doesn't exist. [Read more](geoadd/index) GEODIST ======== Returns the distance between two members of a geospatial index. [Read more](geodist/index) GEOHASH ======== Returns members from a geospatial index as geohash strings. 
[Read more](geohash/index) GEOPOS ======= Returns the longitude and latitude of members from a geospatial index. [Read more](geopos/index) GEORADIUS ========== Queries a geospatial index for members within a distance from a coordinate, optionally stores the result. [Read more](georadius/index) GEORADIUS\_RO ============== Returns members from a geospatial index that are within a distance from a coordinate. [Read more](georadius_ro/index) GEORADIUSBYMEMBER ================== Queries a geospatial index for members within a distance from a member, optionally stores the result. [Read more](georadiusbymember/index) GEORADIUSBYMEMBER\_RO ====================== Returns members from a geospatial index that are within a distance from a member. [Read more](georadiusbymember_ro/index) GEOSEARCH ========== Queries a geospatial index for members inside an area of a box or a circle. [Read more](geosearch/index) GEOSEARCHSTORE =============== Queries a geospatial index for members inside an area of a box or a circle, optionally stores the result. [Read more](geosearchstore/index) GET ==== Returns the string value of a key. [Read more](get/index) GETBIT ======= Returns a bit value by offset. [Read more](getbit/index) GETDEL ======= Returns the string value of a key after deleting the key. [Read more](getdel/index) GETEX ====== Returns the string value of a key after setting its expiration time. [Read more](getex/index) GETRANGE ========= Returns a substring of the string stored at a key. [Read more](getrange/index) GETSET ======= Returns the previous string value of a key after setting it to a new value. 
[Read more](getset/index) GRAPH.CONFIG GET ================= Retrieves a RedisGraph configuration [Read more](graph.config-get/index) GRAPH.CONFIG SET ================= Updates a RedisGraph configuration [Read more](graph.config-set/index) GRAPH.CONSTRAINT CREATE ======================== Creates a constraint on specified graph [Read more](graph.constraint-create/index) GRAPH.CONSTRAINT DROP ====================== Deletes a constraint from specified graph [Read more](graph.constraint-drop/index) GRAPH.DELETE ============= Completely removes the graph and all of its entities [Read more](graph.delete/index) GRAPH.EXPLAIN ============== Returns a query execution plan without running the query [Read more](graph.explain/index) GRAPH.LIST =========== Lists all graph keys in the keyspace [Read more](graph.list/index) GRAPH.PROFILE ============== Executes a query and returns an execution plan augmented with metrics for each operation's execution [Read more](graph.profile/index) GRAPH.QUERY ============ Executes the given query against a specified graph [Read more](graph.query/index) GRAPH.RO\_QUERY ================ Executes a given read-only query against a specified graph [Read more](graph.ro_query/index) GRAPH.SLOWLOG ============== Returns a list containing up to 10 of the slowest queries issued against the given graph [Read more](graph.slowlog/index) HDEL ===== Deletes one or more fields and their values from a hash. Deletes the hash if no fields remain. [Read more](hdel/index) HELLO ====== Handshakes with the Redis server. [Read more](hello/index) HEXISTS ======== Determines whether a field exists in a hash. [Read more](hexists/index) HGET ===== Returns the value of a field in a hash. [Read more](hget/index) HGETALL ======== Returns all fields and values in a hash. [Read more](hgetall/index) HINCRBY ======== Increments the integer value of a field in a hash by a number. Uses 0 as initial value if the field doesn't exist. 
[Read more](hincrby/index) HINCRBYFLOAT ============= Increments the floating point value of a field by a number. Uses 0 as initial value if the field doesn't exist. [Read more](hincrbyfloat/index) HKEYS ====== Returns all fields in a hash. [Read more](hkeys/index) HLEN ===== Returns the number of fields in a hash. [Read more](hlen/index) HMGET ====== Returns the values of multiple fields in a hash. [Read more](hmget/index) HMSET ====== Sets the values of multiple fields. [Read more](hmset/index) HRANDFIELD =========== Returns one or more random fields from a hash. [Read more](hrandfield/index) HSCAN ====== Iterates over fields and values of a hash. [Read more](hscan/index) HSET ===== Creates or modifies the value of a field in a hash. [Read more](hset/index) HSETNX ======= Sets the value of a field in a hash only when the field doesn't exist. [Read more](hsetnx/index) HSTRLEN ======== Returns the length of the value of a field. [Read more](hstrlen/index) HVALS ====== Returns all values in a hash. [Read more](hvals/index) INCR ===== Increments the integer value of a key by one. Uses 0 as initial value if the key doesn't exist. [Read more](incr/index) INCRBY ======= Increments the integer value of a key by a number. Uses 0 as initial value if the key doesn't exist. [Read more](incrby/index) INCRBYFLOAT ============ Increments the floating point value of a key by a number. Uses 0 as initial value if the key doesn't exist. [Read more](incrbyfloat/index) INFO ===== Returns information and statistics about the server. [Read more](info/index) JSON.ARRAPPEND =============== Appends one or more JSON values to the array at path, after the last element in it. 
[Read more](json.arrappend/index) JSON.ARRINDEX ============== Returns the index of the first occurrence of a JSON scalar value in the array at path [Read more](json.arrindex/index) JSON.ARRINSERT =============== Inserts one or more JSON values at the specified index in the array at path [Read more](json.arrinsert/index) JSON.ARRLEN ============ Returns the length of the array at path [Read more](json.arrlen/index) JSON.ARRPOP ============ Removes and returns the element at the specified index in the array at path [Read more](json.arrpop/index) JSON.ARRTRIM ============= Trims the array at path to contain only the specified inclusive range of indices from start to stop [Read more](json.arrtrim/index) JSON.CLEAR =========== Clears all values from an array or an object and sets numeric values to `0` [Read more](json.clear/index) JSON.DEBUG =========== Debugging container command [Read more](json.debug/index) JSON.DEBUG MEMORY ================== Reports the size in bytes of a key [Read more](json.debug-memory/index) JSON.DEL ========= Deletes a value [Read more](json.del/index) JSON.FORGET ============ Deletes a value [Read more](json.forget/index) JSON.GET ========= Gets the value at one or more paths in JSON serialized form [Read more](json.get/index) JSON.MGET ========== Returns the values at a path from one or more keys [Read more](json.mget/index) JSON.NUMINCRBY =============== Increments the numeric value at path by a value [Read more](json.numincrby/index) JSON.NUMMULTBY =============== Multiplies the numeric value at path by a value [Read more](json.nummultby/index) JSON.OBJKEYS ============= Returns the JSON keys of the object at path [Read more](json.objkeys/index) JSON.OBJLEN ============ Returns the number of keys of the object at path [Read more](json.objlen/index) JSON.RESP ========== Returns the JSON value at path in Redis Serialization Protocol (RESP) [Read more](json.resp/index) JSON.SET ========= Sets or updates the JSON value at a path [Read 
more](json.set/index) JSON.STRAPPEND =============== Appends a string to a JSON string value at path [Read more](json.strappend/index) JSON.STRLEN ============ Returns the length of the JSON String at path in key [Read more](json.strlen/index) JSON.TOGGLE ============ Toggles a boolean value [Read more](json.toggle/index) JSON.TYPE ========== Returns the type of the JSON value at path [Read more](json.type/index) KEYS ===== Returns all key names that match a pattern. [Read more](keys/index) LASTSAVE ========= Returns the Unix timestamp of the last successful save to disk. [Read more](lastsave/index) LATENCY DOCTOR =============== Returns a human-readable latency analysis report. [Read more](latency-doctor/index) LATENCY GRAPH ============== Returns a latency graph for an event. [Read more](latency-graph/index) LATENCY HISTOGRAM ================== Returns the cumulative distribution of latencies of a subset or all commands. [Read more](latency-histogram/index) LATENCY HISTORY ================ Returns timestamp-latency samples for an event. [Read more](latency-history/index) LATENCY LATEST =============== Returns the latest latency samples for all events. [Read more](latency-latest/index) LATENCY RESET ============== Resets the latency data for one or more events. [Read more](latency-reset/index) LCS ==== Finds the longest common substring. [Read more](lcs/index) LINDEX ======= Returns an element from a list by its index. [Read more](lindex/index) LINSERT ======== Inserts an element before or after another element in a list. [Read more](linsert/index) LLEN ===== Returns the length of a list. [Read more](llen/index) LMOVE ====== Returns an element after popping it from one list and pushing it to another. Deletes the list if the last element was moved. [Read more](lmove/index) LMPOP ====== Returns multiple elements from a list after removing them. Deletes the list if the last element was popped. 
[Read more](lmpop/index) LOLWUT ======= Displays computer art and the Redis version [Read more](lolwut/index) LPOP ===== Returns the first elements of a list after removing them. Deletes the list if the last element was popped. [Read more](lpop/index) LPOS ===== Returns the index of matching elements in a list. [Read more](lpos/index) LPUSH ====== Prepends one or more elements to a list. Creates the key if it doesn't exist. [Read more](lpush/index) LPUSHX ======= Prepends one or more elements to a list only when the list exists. [Read more](lpushx/index) LRANGE ======= Returns a range of elements from a list. [Read more](lrange/index) LREM ===== Removes elements from a list. Deletes the list if the last element was removed. [Read more](lrem/index) LSET ===== Sets the value of an element in a list by its index. [Read more](lset/index) LTRIM ====== Removes elements from both ends of a list. Deletes the list if all elements were trimmed. [Read more](ltrim/index) MEMORY DOCTOR ============== Outputs a memory problems report. [Read more](memory-doctor/index) MEMORY MALLOC-STATS ==================== Returns the allocator statistics. [Read more](memory-malloc-stats/index) MEMORY PURGE ============= Asks the allocator to release memory. [Read more](memory-purge/index) MEMORY STATS ============= Returns details about memory usage. [Read more](memory-stats/index) MEMORY USAGE ============= Estimates the memory usage of a key. [Read more](memory-usage/index) MGET ===== Atomically returns the string values of one or more keys. [Read more](mget/index) MIGRATE ======== Atomically transfers a key from one Redis instance to another. [Read more](migrate/index) MODULE LIST ============ Returns all loaded modules. [Read more](module-list/index) MODULE LOAD ============ Loads a module. [Read more](module-load/index) MODULE LOADEX ============== Loads a module using extended parameters. [Read more](module-loadex/index) MODULE UNLOAD ============== Unloads a module. 
[Read more](module-unload/index) MONITOR ======== Listens for all requests received by the server in real-time. [Read more](monitor/index) MOVE ===== Moves a key to another database. [Read more](move/index) MSET ===== Atomically creates or modifies the string values of one or more keys. [Read more](mset/index) MSETNX ======= Atomically modifies the string values of one or more keys only when none of the keys exist. [Read more](msetnx/index) MULTI ====== Starts a transaction. [Read more](multi/index) OBJECT ENCODING ================ Returns the internal encoding of a Redis object. [Read more](object-encoding/index) OBJECT FREQ ============ Returns the logarithmic access frequency counter of a Redis object. [Read more](object-freq/index) OBJECT IDLETIME ================ Returns the time since the last access to a Redis object. [Read more](object-idletime/index) OBJECT REFCOUNT ================ Returns the reference count of a value of a key. [Read more](object-refcount/index) PERSIST ======== Removes the expiration time of a key. [Read more](persist/index) PEXPIRE ======== Sets the expiration time of a key in milliseconds. [Read more](pexpire/index) PEXPIREAT ========== Sets the expiration time of a key to a Unix milliseconds timestamp. [Read more](pexpireat/index) PEXPIRETIME ============ Returns the expiration time of a key as a Unix milliseconds timestamp. [Read more](pexpiretime/index) PFADD ====== Adds elements to a HyperLogLog key. Creates the key if it doesn't exist. [Read more](pfadd/index) PFCOUNT ======== Returns the approximated cardinality of the set(s) observed by the HyperLogLog key(s). [Read more](pfcount/index) PFDEBUG ======== Internal commands for debugging HyperLogLog values. [Read more](pfdebug/index) PFMERGE ======== Merges one or more HyperLogLog values into a single key. [Read more](pfmerge/index) PFSELFTEST =========== An internal command for testing HyperLogLog values. 
[Read more](pfselftest/index) PING ===== Returns the server's liveliness response. [Read more](ping/index) PSETEX ======= Sets both string value and expiration time in milliseconds of a key. The key is created if it doesn't exist. [Read more](psetex/index) PSUBSCRIBE =========== Listens for messages published to channels that match one or more patterns. [Read more](psubscribe/index) PSYNC ====== An internal command used in replication. [Read more](psync/index) PTTL ===== Returns the expiration time in milliseconds of a key. [Read more](pttl/index) PUBLISH ======== Posts a message to a channel. [Read more](publish/index) PUBSUB CHANNELS ================ Returns the active channels. [Read more](pubsub-channels/index) PUBSUB NUMPAT ============== Returns a count of unique pattern subscriptions. [Read more](pubsub-numpat/index) PUBSUB NUMSUB ============== Returns a count of subscribers to channels. [Read more](pubsub-numsub/index) PUBSUB SHARDCHANNELS ===================== Returns the active shard channels. [Read more](pubsub-shardchannels/index) PUBSUB SHARDNUMSUB =================== Returns the count of subscribers of shard channels. [Read more](pubsub-shardnumsub/index) PUNSUBSCRIBE ============= Stops listening to messages published to channels that match one or more patterns. [Read more](punsubscribe/index) QUIT ===== Closes the connection. [Read more](quit/index) RANDOMKEY ========== Returns a random key name from the database. [Read more](randomkey/index) READONLY ========= Enables read-only queries for a connection to a Redis Cluster replica node. [Read more](readonly/index) READWRITE ========== Enables read-write queries for a connection to a Redis Cluster replica node. [Read more](readwrite/index) RENAME ======= Renames a key and overwrites the destination. [Read more](rename/index) RENAMENX ========= Renames a key only when the target key name doesn't exist. 
[Read more](renamenx/index) REPLCONF ========= An internal command for configuring the replication stream. [Read more](replconf/index) REPLICAOF ========== Configures a server as replica of another, or promotes it to a master. [Read more](replicaof/index) RESET ====== Resets the connection. [Read more](reset/index) RESTORE ======== Creates a key from the serialized representation of a value. [Read more](restore/index) RESTORE-ASKING =============== An internal command for migrating keys in a cluster. [Read more](restore-asking/index) ROLE ===== Returns the replication role. [Read more](role/index) RPOP ===== Returns and removes the last elements of a list. Deletes the list if the last element was popped. [Read more](rpop/index) RPOPLPUSH ========== Returns the last element of a list after removing and pushing it to another list. Deletes the list if the last element was popped. [Read more](rpoplpush/index) RPUSH ====== Appends one or more elements to a list. Creates the key if it doesn't exist. [Read more](rpush/index) RPUSHX ======= Appends an element to a list only when the list exists. [Read more](rpushx/index) SADD ===== Adds one or more members to a set. Creates the key if it doesn't exist. [Read more](sadd/index) SAVE ===== Synchronously saves the database(s) to disk. [Read more](save/index) SCAN ===== Iterates over the key names in the database. [Read more](scan/index) SCARD ====== Returns the number of members in a set. [Read more](scard/index) SCRIPT DEBUG ============= Sets the debug mode of server-side Lua scripts. [Read more](script-debug/index) SCRIPT EXISTS ============== Determines whether server-side Lua scripts exist in the script cache. [Read more](script-exists/index) SCRIPT FLUSH ============= Removes all server-side Lua scripts from the script cache. [Read more](script-flush/index) SCRIPT KILL ============ Terminates a server-side Lua script during execution. 
[Read more](script-kill/index) SCRIPT LOAD ============ Loads a server-side Lua script to the script cache. [Read more](script-load/index) SDIFF ====== Returns the difference of multiple sets. [Read more](sdiff/index) SDIFFSTORE =========== Stores the difference of multiple sets in a key. [Read more](sdiffstore/index) SELECT ======= Changes the selected database. [Read more](select/index) SET ==== Sets the string value of a key, ignoring its type. The key is created if it doesn't exist. [Read more](set/index) SETBIT ======= Sets or clears the bit at offset of the string value. Creates the key if it doesn't exist. [Read more](setbit/index) SETEX ====== Sets the string value and expiration time of a key. Creates the key if it doesn't exist. [Read more](setex/index) SETNX ====== Sets the string value of a key only when the key doesn't exist. [Read more](setnx/index) SETRANGE ========= Overwrites a part of a string value with another by an offset. Creates the key if it doesn't exist. [Read more](setrange/index) SHUTDOWN ========= Synchronously saves the database(s) to disk and shuts down the Redis server. [Read more](shutdown/index) SINTER ======= Returns the intersect of multiple sets. [Read more](sinter/index) SINTERCARD =========== Returns the number of members of the intersect of multiple sets. [Read more](sintercard/index) SINTERSTORE ============ Stores the intersect of multiple sets in a key. [Read more](sinterstore/index) SISMEMBER ========== Determines whether a member belongs to a set. [Read more](sismember/index) SLAVEOF ======== Sets a Redis server as a replica of another, or promotes it to being a master. [Read more](slaveof/index) SLOWLOG GET ============ Returns the slow log's entries. [Read more](slowlog-get/index) SLOWLOG LEN ============ Returns the number of entries in the slow log. [Read more](slowlog-len/index) SLOWLOG RESET ============== Clears all entries from the slow log. 
[Read more](slowlog-reset/index) SMEMBERS ========= Returns all members of a set. [Read more](smembers/index) SMISMEMBER =========== Determines whether multiple members belong to a set. [Read more](smismember/index) SMOVE ====== Moves a member from one set to another. [Read more](smove/index) SORT ===== Sorts the elements in a list, a set, or a sorted set, optionally storing the result. [Read more](sort/index) SORT\_RO ========= Returns the sorted elements of a list, a set, or a sorted set. [Read more](sort_ro/index) SPOP ===== Returns one or more random members from a set after removing them. Deletes the set if the last member was popped. [Read more](spop/index) SPUBLISH ========= Posts a message to a shard channel. [Read more](spublish/index) SRANDMEMBER ============ Gets one or more random members from a set. [Read more](srandmember/index) SREM ===== Removes one or more members from a set. Deletes the set if the last member was removed. [Read more](srem/index) SSCAN ====== Iterates over members of a set. [Read more](sscan/index) SSUBSCRIBE =========== Listens for messages published to shard channels. [Read more](ssubscribe/index) STRLEN ======= Returns the length of a string value. [Read more](strlen/index) SUBSCRIBE ========== Listens for messages published to channels. [Read more](subscribe/index) SUBSTR ======= Returns a substring from a string value. [Read more](substr/index) SUNION ======= Returns the union of multiple sets. [Read more](sunion/index) SUNIONSTORE ============ Stores the union of multiple sets in a key. [Read more](sunionstore/index) SUNSUBSCRIBE ============= Stops listening to messages posted to shard channels. [Read more](sunsubscribe/index) SWAPDB ======= Swaps two Redis databases. [Read more](swapdb/index) SYNC ===== An internal command used in replication. 
[Read more](sync/index) TDIGEST.ADD ============ Adds one or more observations to a t-digest sketch [Read more](tdigest.add/index) TDIGEST.BYRANK =============== Returns, for each input rank, an estimation of the value (floating-point) with that rank [Read more](tdigest.byrank/index) TDIGEST.BYREVRANK ================== Returns, for each input reverse rank, an estimation of the value (floating-point) with that reverse rank [Read more](tdigest.byrevrank/index) TDIGEST.CDF ============ Returns, for each input value, an estimation of the fraction (floating-point) of (observations smaller than the given value + half the observations equal to the given value) [Read more](tdigest.cdf/index) TDIGEST.CREATE =============== Allocates memory and initializes a new t-digest sketch [Read more](tdigest.create/index) TDIGEST.INFO ============= Returns information and statistics about a t-digest sketch [Read more](tdigest.info/index) TDIGEST.MAX ============ Returns the maximum observation value from a t-digest sketch [Read more](tdigest.max/index) TDIGEST.MERGE ============== Merges multiple t-digest sketches into a single sketch [Read more](tdigest.merge/index) TDIGEST.MIN ============ Returns the minimum observation value from a t-digest sketch [Read more](tdigest.min/index) TDIGEST.QUANTILE ================= Returns, for each input fraction, an estimation of the value (floating point) that is smaller than the given fraction of observations [Read more](tdigest.quantile/index) TDIGEST.RANK ============= Returns, for each input value (floating-point), the estimated rank of the value (the number of observations in the sketch that are smaller than the value + half the number of observations that are equal to the value) [Read more](tdigest.rank/index) TDIGEST.RESET ============== Resets a t-digest sketch: empties the sketch and re-initializes it. 
[Read more](tdigest.reset/index) TDIGEST.REVRANK ================ Returns, for each input value (floating-point), the estimated reverse rank of the value (the number of observations in the sketch that are larger than the value + half the number of observations that are equal to the value) [Read more](tdigest.revrank/index) TDIGEST.TRIMMED\_MEAN ====================== Returns an estimation of the mean value from the sketch, excluding observation values outside the low and high cutoff quantiles [Read more](tdigest.trimmed_mean/index) TIME ===== Returns the server time. [Read more](time/index) TOPK.ADD ========= Increases the count of one or more items by increment [Read more](topk.add/index) TOPK.COUNT =========== Returns the count for one or more items in a sketch [Read more](topk.count/index) TOPK.INCRBY ============ Increases the count of one or more items by increment [Read more](topk.incrby/index) TOPK.INFO ========== Returns information about a sketch [Read more](topk.info/index) TOPK.LIST ========== Returns the full list of items in a Top-K list [Read more](topk.list/index) TOPK.QUERY =========== Checks whether one or more items are in a sketch [Read more](topk.query/index) TOPK.RESERVE ============= Initializes a TopK with specified parameters [Read more](topk.reserve/index) TOUCH ====== Returns the number of existing keys out of those specified after updating the time they were last accessed. 
[Read more](touch/index) TS.ADD ======= Appends a sample to a time series [Read more](ts.add/index) TS.ALTER ========= Updates the retention, chunk size, duplicate policy, and labels of an existing time series [Read more](ts.alter/index) TS.CREATE ========== Creates a new time series [Read more](ts.create/index) TS.CREATERULE ============== Creates a compaction rule [Read more](ts.createrule/index) TS.DECRBY ========== Decreases the value of the sample with the maximal existing timestamp, or creates a new sample with a value equal to the value of the sample with the maximal existing timestamp with a given decrement [Read more](ts.decrby/index) TS.DEL ======= Deletes all samples between two timestamps for a given time series [Read more](ts.del/index) TS.DELETERULE ============== Deletes a compaction rule [Read more](ts.deleterule/index) TS.GET ======= Gets the sample with the highest timestamp from a given time series [Read more](ts.get/index) TS.INCRBY ========== Increases the value of the sample with the maximal existing timestamp, or creates a new sample with a value equal to the value of the sample with the maximal existing timestamp with a given increment [Read more](ts.incrby/index) TS.INFO ======== Returns information and statistics for a time series [Read more](ts.info/index) TS.MADD ======== Appends new samples to one or more time series [Read more](ts.madd/index) TS.MGET ======== Gets the sample with the highest timestamp from each time series matching a specific filter [Read more](ts.mget/index) TS.MRANGE ========== Queries a range across multiple time series by filters in forward direction [Read more](ts.mrange/index) TS.MREVRANGE ============= Queries a range across multiple time series by filters in reverse direction [Read more](ts.mrevrange/index) TS.QUERYINDEX ============== Gets all time series keys matching a filter list [Read more](ts.queryindex/index) TS.RANGE ========= Queries a range in forward direction [Read more](ts.range/index) TS.REVRANGE ============ Queries a 
range in reverse direction [Read more](ts.revrange/index) TTL ==== Returns the expiration time in seconds of a key. [Read more](ttl/index) TYPE ===== Determines the type of value stored at a key. [Read more](type/index) UNLINK ======= Asynchronously deletes one or more keys. [Read more](unlink/index) UNSUBSCRIBE ============ Stops listening to messages posted to channels. [Read more](unsubscribe/index) UNWATCH ======== Forgets about watched keys of a transaction. [Read more](unwatch/index) WAIT ===== Blocks until the asynchronous replication of all preceding write commands sent by the connection is completed. [Read more](wait/index) WAITAOF ======== Blocks until all of the preceding write commands sent by the connection are written to the append-only file of the master and/or replicas. [Read more](waitaof/index) WATCH ====== Monitors changes to keys to determine the execution of a transaction. [Read more](watch/index) XACK ===== Returns the number of messages that were successfully acknowledged by the consumer group member of a stream. [Read more](xack/index) XADD ===== Appends a new message to a stream. Creates the key if it doesn't exist. [Read more](xadd/index) XAUTOCLAIM =========== Changes, or acquires, ownership of messages in a consumer group, as if the messages were delivered to a consumer group member. [Read more](xautoclaim/index) XCLAIM ======= Changes, or acquires, ownership of a message in a consumer group, as if the message was delivered to a consumer group member. [Read more](xclaim/index) XDEL ===== Returns the number of messages after removing them from a stream. [Read more](xdel/index) XGROUP CREATE ============== Creates a consumer group. [Read more](xgroup-create/index) XGROUP CREATECONSUMER ====================== Creates a consumer in a consumer group. [Read more](xgroup-createconsumer/index) XGROUP DELCONSUMER =================== Deletes a consumer from a consumer group. 
[Read more](xgroup-delconsumer/index) XGROUP DESTROY =============== Destroys a consumer group. [Read more](xgroup-destroy/index) XGROUP SETID ============= Sets the last-delivered ID of a consumer group. [Read more](xgroup-setid/index) XINFO CONSUMERS ================ Returns a list of the consumers in a consumer group. [Read more](xinfo-consumers/index) XINFO GROUPS ============= Returns a list of the consumer groups of a stream. [Read more](xinfo-groups/index) XINFO STREAM ============= Returns information about a stream. [Read more](xinfo-stream/index) XLEN ===== Return the number of messages in a stream. [Read more](xlen/index) XPENDING ========= Returns the information and entries from a stream consumer group's pending entries list. [Read more](xpending/index) XRANGE ======= Returns the messages from a stream within a range of IDs. [Read more](xrange/index) XREAD ====== Returns messages from multiple streams with IDs greater than the ones requested. Blocks until a message is available otherwise. [Read more](xread/index) XREADGROUP =========== Returns new or historical messages from a stream for a consumer in a group. Blocks until a message is available otherwise. [Read more](xreadgroup/index) XREVRANGE ========== Returns the messages from a stream within a range of IDs in reverse order. [Read more](xrevrange/index) XSETID ======= An internal command for replicating stream values. [Read more](xsetid/index) XTRIM ====== Deletes messages from the beginning of a stream. [Read more](xtrim/index) ZADD ===== Adds one or more members to a sorted set, or updates their scores. Creates the key if it doesn't exist. [Read more](zadd/index) ZCARD ====== Returns the number of members in a sorted set. [Read more](zcard/index) ZCOUNT ======= Returns the count of members in a sorted set that have scores within a range. [Read more](zcount/index) ZDIFF ====== Returns the difference between multiple sorted sets. 
[Read more](zdiff/index) ZDIFFSTORE =========== Stores the difference of multiple sorted sets in a key. [Read more](zdiffstore/index) ZINCRBY ======== Increments the score of a member in a sorted set. [Read more](zincrby/index) ZINTER ======= Returns the intersect of multiple sorted sets. [Read more](zinter/index) ZINTERCARD =========== Returns the number of members of the intersect of multiple sorted sets. [Read more](zintercard/index) ZINTERSTORE ============ Stores the intersect of multiple sorted sets in a key. [Read more](zinterstore/index) ZLEXCOUNT ========== Returns the number of members in a sorted set within a lexicographical range. [Read more](zlexcount/index) ZMPOP ====== Returns the highest- or lowest-scoring members from one or more sorted sets after removing them. Deletes the sorted set if the last member was popped. [Read more](zmpop/index) ZMSCORE ======== Returns the score of one or more members in a sorted set. [Read more](zmscore/index) ZPOPMAX ======== Returns the highest-scoring members from a sorted set after removing them. Deletes the sorted set if the last member was popped. [Read more](zpopmax/index) ZPOPMIN ======== Returns the lowest-scoring members from a sorted set after removing them. Deletes the sorted set if the last member was popped. [Read more](zpopmin/index) ZRANDMEMBER ============ Returns one or more random members from a sorted set. [Read more](zrandmember/index) ZRANGE ======= Returns members in a sorted set within a range of indexes. [Read more](zrange/index) ZRANGEBYLEX ============ Returns members in a sorted set within a lexicographical range. [Read more](zrangebylex/index) ZRANGEBYSCORE ============== Returns members in a sorted set within a range of scores. [Read more](zrangebyscore/index) ZRANGESTORE ============ Stores a range of members from sorted set in a key. [Read more](zrangestore/index) ZRANK ====== Returns the index of a member in a sorted set ordered by ascending scores. 
[Read more](zrank/index) ZREM ===== Removes one or more members from a sorted set. Deletes the sorted set if all members were removed. [Read more](zrem/index) ZREMRANGEBYLEX =============== Removes members in a sorted set within a lexicographical range. Deletes the sorted set if all members were removed. [Read more](zremrangebylex/index) ZREMRANGEBYRANK ================ Removes members in a sorted set within a range of indexes. Deletes the sorted set if all members were removed. [Read more](zremrangebyrank/index) ZREMRANGEBYSCORE ================= Removes members in a sorted set within a range of scores. Deletes the sorted set if all members were removed. [Read more](zremrangebyscore/index) ZREVRANGE ========== Returns members in a sorted set within a range of indexes in reverse order. [Read more](zrevrange/index) ZREVRANGEBYLEX =============== Returns members in a sorted set within a lexicographical range in reverse order. [Read more](zrevrangebylex/index) ZREVRANGEBYSCORE ================= Returns members in a sorted set within a range of scores in reverse order. [Read more](zrevrangebyscore/index) ZREVRANK ========= Returns the index of a member in a sorted set ordered by descending scores. [Read more](zrevrank/index) ZSCAN ====== Iterates over members and scores of a sorted set. [Read more](zscan/index) ZSCORE ======= Returns the score of a member in a sorted set. [Read more](zscore/index) ZUNION ======= Returns the union of multiple sorted sets. [Read more](zunion/index) ZUNIONSTORE ============ Stores the union of multiple sorted sets in a key. [Read more](zunionstore/index)
redis ZREMRANGEBYRANK ZREMRANGEBYRANK =============== ``` ZREMRANGEBYRANK ``` Syntax ``` ZREMRANGEBYRANK key start stop ``` Available since: 2.0.0 Time complexity: O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements removed by the operation. ACL categories: `@write`, `@sortedset`, `@slow`, Removes all elements in the sorted set stored at `key` with rank between `start` and `stop`. Both `start` and `stop` are `0`-based indexes with `0` being the element with the lowest score. These indexes can be negative numbers, where they indicate offsets starting at the element with the highest score. For example: `-1` is the element with the highest score, `-2` the element with the second highest score and so forth. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of elements removed. Examples -------- ``` ZADD myzset 1 "one" ZADD myzset 2 "two" ZADD myzset 3 "three" ZREMRANGEBYRANK myzset 0 1 ZRANGE myzset 0 -1 WITHSCORES ``` redis ZLEXCOUNT ZLEXCOUNT ========= ``` ZLEXCOUNT ``` Syntax ``` ZLEXCOUNT key min max ``` Available since: 2.8.9 Time complexity: O(log(N)) with N being the number of elements in the sorted set. ACL categories: `@read`, `@sortedset`, `@fast`, When all the elements in a sorted set are inserted with the same score, in order to force lexicographical ordering, this command returns the number of elements in the sorted set at `key` with a value between `min` and `max`. The `min` and `max` arguments have the same meaning as described for [`ZRANGEBYLEX`](../zrangebylex). Note: the command has a complexity of just O(log(N)) because it uses element ranks (see [`ZRANK`](../zrank)) to get an idea of the range. Because of this there is no need to do work proportional to the size of the range. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of elements in the specified range. 
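The `min`/`max` bracket syntax can be modeled in plain Python over a sorted list of members (an illustrative sketch of the documented semantics, not the server implementation; `zlexcount` is a hypothetical helper name):

```python
import bisect

def zlexcount(members, lo, hi):
    """Count members between two lex bounds: '[x' is inclusive, '(x' is
    exclusive, '-' means minus infinity and '+' means plus infinity."""
    s = sorted(members)

    def edge(bound, is_min):
        if bound == "-":
            return 0
        if bound == "+":
            return len(s)
        value, inclusive = bound[1:], bound[0] == "["
        if is_min:
            # leftmost index inside the range
            return bisect.bisect_left(s, value) if inclusive else bisect.bisect_right(s, value)
        # one past the rightmost index inside the range
        return bisect.bisect_right(s, value) if inclusive else bisect.bisect_left(s, value)

    return max(0, edge(hi, False) - edge(lo, True))

members = ["a", "b", "c", "d", "e", "f", "g"]
print(zlexcount(members, "-", "+"))    # 7, all members
print(zlexcount(members, "[b", "[f"))  # 5, b..f inclusive
```

On the seven members used in the example below, the model reproduces the counts returned by `ZLEXCOUNT myzset - +` and `ZLEXCOUNT myzset [b [f`.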
Examples -------- ``` ZADD myzset 0 a 0 b 0 c 0 d 0 e ZADD myzset 0 f 0 g ZLEXCOUNT myzset - + ZLEXCOUNT myzset [b [f ``` redis FT.EXPLAIN FT.EXPLAIN ========== ``` FT.EXPLAIN ``` Syntax ``` FT.EXPLAIN index query [DIALECT dialect] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.0.0](https://redis.io/docs/stack/search) Time complexity: O(1) Return the execution plan for a complex query [Examples](#examples) Required arguments ------------------ `index` is index name. You must first create the index using [`FT.CREATE`](../ft.create). `query` is query string, as if sent to `FT.SEARCH`. Optional arguments ------------------ `DIALECT {dialect_version}` is dialect version under which to execute the query. If not specified, the query executes under the default dialect version set during module initial loading or via [`FT.CONFIG SET`](../ft.config-set) command. Notes * In the returned response, a `+` on a term is an indication of stemming. * Use `redis-cli --raw` to properly read line-breaks in the returned response. Return ------ FT.EXPLAIN returns a string representing the execution plan. Examples -------- **Return the execution plan for a complex query** ``` $ redis-cli --raw 127.0.0.1:6379> FT.EXPLAIN rd "(foo bar)|(hello world) @date:[100 200]|@date:[500 +inf]" INTERSECT { UNION { INTERSECT { foo bar } INTERSECT { hello world } } UNION { NUMERIC {100.000000 <= x <= 200.000000} NUMERIC {500.000000 <= x <= inf} } } ``` See also -------- [`FT.CREATE`](../ft.create) | [`FT.SEARCH`](../ft.search) | [`FT.CONFIG SET`](../ft.config-set) Related topics -------------- [RediSearch](https://redis.io/docs/stack/search) redis ZREVRANGE ZREVRANGE ========= ``` ZREVRANGE (deprecated) ``` As of Redis version 6.2.0, this command is regarded as deprecated. It can be replaced by [`ZRANGE`](../zrange) with the `REV` argument when migrating or writing new code. 
Syntax ``` ZREVRANGE key start stop [WITHSCORES] ``` Available since: 1.2.0 Time complexity: O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements returned. ACL categories: `@read`, `@sortedset`, `@slow`, Returns the specified range of elements in the sorted set stored at `key`. The elements are considered to be ordered from the highest to the lowest score. Descending lexicographical order is used for elements with equal score. Apart from the reversed ordering, `ZREVRANGE` is similar to [`ZRANGE`](../zrange). Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list of elements in the specified range (optionally with their scores). Examples -------- ``` ZADD myzset 1 "one" ZADD myzset 2 "two" ZADD myzset 3 "three" ZREVRANGE myzset 0 -1 ZREVRANGE myzset 2 3 ZREVRANGE myzset -2 -1 ``` redis CLUSTER CLUSTER ======= ``` CLUSTER BUMPEPOCH ``` Syntax ``` CLUSTER BUMPEPOCH ``` Available since: 3.0.0 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, Advances the cluster config epoch. The `CLUSTER BUMPEPOCH` command triggers an increment to the cluster's config epoch from the connected node. The epoch will be incremented if the node's config epoch is zero, or if it is less than the cluster's greatest epoch. **Note:** config epoch management is performed internally by the cluster, and relies on obtaining a consensus of nodes. The `CLUSTER BUMPEPOCH` attempts to increment the config epoch **WITHOUT** getting the consensus, so using it may violate the "last failover wins" rule. Use it with caution. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `BUMPED` if the epoch was incremented, or `STILL` if the node already has the greatest config epoch in the cluster. 
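The bump rule described above (increment only when the node's config epoch is zero or below the cluster's greatest epoch) can be sketched as a tiny Python model. This is illustrative only: `bumpepoch` is a hypothetical helper, and bumping to greatest-plus-one is an assumption for the sketch, since real epoch management is negotiated inside the cluster.

```python
def bumpepoch(node_epoch, cluster_greatest_epoch):
    """Model of the CLUSTER BUMPEPOCH decision: bump when the node's config
    epoch is zero or less than the cluster's greatest epoch; otherwise the
    node already holds the greatest epoch and nothing changes."""
    if node_epoch == 0 or node_epoch < cluster_greatest_epoch:
        # Assumption: model the new epoch as greatest + 1 for illustration.
        return "BUMPED", cluster_greatest_epoch + 1
    return "STILL", node_epoch

print(bumpepoch(0, 0))   # a fresh node always bumps
print(bumpepoch(5, 5))   # already the greatest: STILL
```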
redis LPOS LPOS ==== ``` LPOS ``` Syntax ``` LPOS key element [RANK rank] [COUNT num-matches] [MAXLEN len] ``` Available since: 6.0.6 Time complexity: O(N) where N is the number of elements in the list, for the average case. When searching for elements near the head or the tail of the list, or when the MAXLEN option is provided, the command may run in constant time. ACL categories: `@read`, `@list`, `@slow`, The command returns the index of matching elements inside a Redis list. By default, when no options are given, it will scan the list from head to tail, looking for the first match of "element". If the element is found, its index (the zero-based position in the list) is returned. Otherwise, if no match is found, `nil` is returned. ``` > RPUSH mylist a b c 1 2 3 c c > LPOS mylist c 2 ``` The optional arguments and options can modify the command's behavior. The `RANK` option specifies the "rank" of the first element to return, in case there are multiple matches. A rank of 1 means to return the first match, 2 to return the second match, and so forth. For instance, in the above example the element "c" is present multiple times; if I want the index of the second match, I'll write: ``` > LPOS mylist c RANK 2 6 ``` That is, the second occurrence of "c" is at position 6. A negative "rank" as the `RANK` argument tells `LPOS` to invert the search direction, starting from the tail to the head. So, we want to say, give me the first element starting from the tail of the list: ``` > LPOS mylist c RANK -1 7 ``` Note that the indexes are still reported in the "natural" way, that is, considering the first element starting from the head of the list at index 0, the next element at index 1, and so forth. This basically means that the returned indexes are stable whether the rank is positive or negative. Sometimes we want to return not just the Nth matching element, but the position of all the first N matching elements. This can be achieved using the `COUNT` option. 
``` > LPOS mylist c COUNT 2 [2,6] ``` We can combine `COUNT` and `RANK`, so that `COUNT` will try to return up to the specified number of matches, but starting from the Nth match, as specified by the `RANK` option. ``` > LPOS mylist c RANK -1 COUNT 2 [7,6] ``` When `COUNT` is used, it is possible to specify 0 as the number of matches, as a way to tell the command we want all the matches found returned as an array of indexes. This is better than giving a very large `COUNT` option because it is more general. ``` > LPOS mylist c COUNT 0 [2,6,7] ``` When `COUNT` is used and no match is found, an empty array is returned. However when `COUNT` is not used and there are no matches, the command returns `nil`. Finally, the `MAXLEN` option tells the command to compare the provided element only with a given maximum number of list items. So for instance specifying `MAXLEN 1000` will make sure that the command performs only 1000 comparisons, effectively running the algorithm on a subset of the list (the first part or the last part depending on whether we use a positive or negative rank). This is useful to limit the maximum complexity of the command. It is also useful when we expect the match to be found very early, but want to be sure that in case this is not true, the command does not take too much time to run. When `MAXLEN` is used, it is possible to specify 0 as the maximum number of comparisons, as a way to tell the command we want unlimited comparisons. This is better than giving a very large `MAXLEN` option because it is more general. Return ------ The command returns the integer index of the matching element, or `nil` if there is no match. However, if the `COUNT` option is given the command returns an array (empty if there are no matches). 
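The `RANK`/`COUNT`/`MAXLEN` semantics above can be modeled in plain Python (an illustrative sketch, not the server's implementation; `lpos` here returns `None` and `[]` where Redis returns `nil` and an empty array):

```python
def lpos(lst, element, rank=1, count=None, maxlen=0):
    """Model of LPOS: RANK picks which match to start from (negative means
    search tail-to-head), COUNT returns up to N indexes (0 = all matches),
    MAXLEN caps the number of comparisons performed."""
    if rank == 0:
        raise ValueError("RANK can't be zero")
    order = range(len(lst)) if rank > 0 else range(len(lst) - 1, -1, -1)
    if maxlen:
        order = list(order)[:maxlen]  # compare at most `maxlen` items
    skip = abs(rank) - 1              # skip the first |rank|-1 matches
    limit = None if count is None else (count or len(lst))  # COUNT 0 = all
    matches = []
    for i in order:
        if lst[i] != element:
            continue
        if skip:
            skip -= 1
            continue
        if count is None:
            return i                  # single-index form
        matches.append(i)
        if len(matches) == limit:
            break
    return None if count is None else matches

mylist = ["a", "b", "c", "1", "2", "3", "c", "c"]
print(lpos(mylist, "c"))            # 2
print(lpos(mylist, "c", rank=-1))   # 7
print(lpos(mylist, "c", count=0))   # [2, 6, 7]
```

Note how, as the text explains, indexes are always reported relative to the head even for a negative rank.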
Examples -------- ``` RPUSH mylist a b c d 1 2 3 4 3 3 3 LPOS mylist 3 LPOS mylist 3 COUNT 0 RANK 2 ``` redis LATENCY LATENCY ======= ``` LATENCY HISTOGRAM ``` Syntax ``` LATENCY HISTOGRAM [command [command ...]] ``` Available since: 7.0.0 Time complexity: O(N) where N is the number of commands with latency information being retrieved. ACL categories: `@admin`, `@slow`, `@dangerous`, `LATENCY HISTOGRAM` returns a cumulative distribution of commands' latencies in histogram format. By default, all available latency histograms are returned. You can filter the reply by providing specific command names. Each histogram consists of the following fields: * Command name * The total calls for that command * A map of time buckets: + Each bucket represents a latency range + Each bucket covers twice the previous bucket's range + Empty buckets are excluded from the reply + The tracked latencies are between 1 microsecond and roughly 1 second + Everything above 1 second is considered +Inf + At max, there will be log2(1,000,000,000)=30 buckets This command requires the extended latency monitoring feature to be enabled, which is the default. If you need to enable it, call `CONFIG SET latency-tracking yes`. To delete the latency histograms' data use the [`CONFIG RESETSTAT`](../config-resetstat) command. Examples -------- ``` 127.0.0.1:6379> LATENCY HISTOGRAM set 1# "set" => 1# "calls" => (integer) 100000 2# "histogram_usec" => 1# (integer) 1 => (integer) 99583 2# (integer) 2 => (integer) 99852 3# (integer) 4 => (integer) 99914 4# (integer) 8 => (integer) 99940 5# (integer) 16 => (integer) 99968 6# (integer) 33 => (integer) 100000 ``` Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): specifically: The command returns a map where each key is a command name. The value is a map with a key for the total calls, and a map of the histogram time buckets. 
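The bucket layout described above (each bucket covers twice the previous bucket's range, roughly 30 buckets between 1 microsecond and 1 second) can be sketched with power-of-two boundaries. This is an illustrative model of the documented shape, not Redis's exact histogram implementation:

```python
import math

def bucket_upper_bound(usec):
    """Smallest power-of-two boundary >= the observed latency, so each
    bucket covers twice the previous bucket's range."""
    return 1 << max(0, usec - 1).bit_length()

# 1 usec .. ~1 second gives about log2(1,000,000,000) ~= 30 buckets
print(math.ceil(math.log2(1_000_000_000)))  # 30
print(bucket_upper_bound(3))                # a 3-usec call lands in the <=4 bucket
```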
redis CLUSTER CLUSTER ======= ``` CLUSTER SETSLOT ``` Syntax ``` CLUSTER SETSLOT slot <IMPORTING node-id | MIGRATING node-id | NODE node-id | STABLE> ``` Available since: 3.0.0 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, `CLUSTER SETSLOT` is responsible for changing the state of a hash slot in the receiving node in different ways. It can, depending on the subcommand used: 1. `MIGRATING` subcommand: Set a hash slot in *migrating* state. 2. `IMPORTING` subcommand: Set a hash slot in *importing* state. 3. `STABLE` subcommand: Clear any importing / migrating state from hash slot. 4. `NODE` subcommand: Bind the hash slot to a different node. The command with its set of subcommands is useful in order to start and end cluster live resharding operations, which are accomplished by setting a hash slot in migrating state in the source node, and importing state in the destination node. Each subcommand is documented below. At the end you'll find a description of how live resharding is performed using this command and other related commands. CLUSTER SETSLOT `<slot>` MIGRATING `<destination-node-id>` ---------------------------------------------------------- This subcommand sets a slot to *migrating* state. In order to set a slot in this state, the node receiving the command must be the hash slot owner, otherwise an error is returned. When a slot is set in migrating state, the node changes behavior in the following way: 1. If a command is received about an existing key, the command is processed as usual. 2. If a command is received about a key that does not exist, an `ASK` redirection is emitted by the node, asking the client to retry only that specific query into `destination-node`. In this case the client should not update its hash slot to node mapping. 3. 
If the command contains multiple keys: if none exist, the behavior is the same as in point 2; if all exist, it is the same as in point 1; however, if only some of the keys exist, the command emits a `TRYAGAIN` error so that the keys in question can finish migrating to the target node, after which the multi-key command can be executed. CLUSTER SETSLOT `<slot>` IMPORTING `<source-node-id>` ----------------------------------------------------- This subcommand is the reverse of `MIGRATING`, and prepares the destination node to import keys from the specified source node. The command only works if the node is not already owner of the specified hash slot. When a slot is set in importing state, the node changes behavior in the following way: 1. Commands about this hash slot are refused and a `MOVED` redirection is generated as usual, but if the command is preceded by an [`ASKING`](../asking) command, it is executed. In this way when a node in migrating state generates an `ASK` redirection, the client contacts the target node, sends [`ASKING`](../asking), and immediately after sends the command. This way commands about non-existing keys in the old node or keys already migrated to the target node are executed in the target node, so that: 1. New keys are always created in the target node. During a hash slot migration we'll have to move only old keys, not new ones. 2. Commands about keys already migrated are correctly processed in the context of the node which is the target of the migration, the new hash slot owner, in order to guarantee consistency. 3. Without [`ASKING`](../asking) the behavior is the same as usual. This guarantees that clients with a broken hash slots mapping will not mistakenly write to the target node, creating a new version of a key that has yet to be migrated. CLUSTER SETSLOT `<slot>` STABLE ------------------------------- This subcommand just clears migrating / importing state from the slot. 
It is mainly used to fix a cluster stuck in a wrong state by `redis-cli --cluster fix`. Normally the two states are cleared automatically at the end of the migration using the `SETSLOT ... NODE ...` subcommand as explained in the next section. CLUSTER SETSLOT `<slot>` NODE `<node-id>` ----------------------------------------- The `NODE` subcommand is the one with the most complex semantics. It associates the hash slot with the specified node, however the command works only in specific situations and has different side effects depending on the slot state. The following is the set of pre-conditions and side effects of the command: 1. If the current hash slot owner is the node receiving the command, but for effect of the command the slot would be assigned to a different node, the command will return an error if there are still keys for that hash slot in the node receiving the command. 2. If the slot is in *migrating* state, the state gets cleared when the slot is assigned to another node. 3. If the slot was in *importing* state in the node receiving the command, and the command assigns the slot to this node (which happens in the target node at the end of the resharding of a hash slot from one node to another), the command has the following side effects: A) the *importing* state is cleared. B) If the node config epoch is not already the greatest of the cluster, it generates a new one and assigns the new config epoch to itself. This way its new hash slot ownership will win over any past configuration created by previous failovers or slot migrations. It is important to note that step 3 is the only time when a Redis Cluster node will create a new config epoch without agreement from other nodes. This only happens when a manual configuration is operated. However it is impossible that this creates a non-transient setup where two nodes have the same config epoch, since Redis Cluster uses a config epoch collision resolution algorithm. 
Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): All the subcommands return `OK` if the command was successful. Otherwise an error is returned. Redis Cluster live resharding explained --------------------------------------- The `CLUSTER SETSLOT` command is an important piece used by Redis Cluster in order to migrate all the keys contained in one hash slot from one node to another. This is how the migration is orchestrated, with the help of other commands as well. We'll call the node that has the current ownership of the hash slot the `source` node, and the node where we want to migrate the `destination` node. 1. Set the destination node slot to *importing* state using `CLUSTER SETSLOT <slot> IMPORTING <source-node-id>`. 2. Set the source node slot to *migrating* state using `CLUSTER SETSLOT <slot> MIGRATING <destination-node-id>`. 3. Get keys from the source node with [`CLUSTER GETKEYSINSLOT`](../cluster-getkeysinslot) command and move them into the destination node using the [`MIGRATE`](../migrate) command. 4. Send `CLUSTER SETSLOT <slot> NODE <destination-node-id>` to the destination node. 5. Send `CLUSTER SETSLOT <slot> NODE <destination-node-id>` to the source node. 6. Send `CLUSTER SETSLOT <slot> NODE <destination-node-id>` to the other master nodes (optional). Notes: * The order of step 1 and 2 is important. We want the destination node to be ready to accept `ASK` redirections when the source node is configured to redirect. * The order of step 4 and 5 is important. The destination node is responsible for propagating the change to the rest of the cluster. If the source node is informed before the destination node and the destination node crashes before it is set as new slot owner, the slot is left with no owner, even after a successful failover. * Step 6, sending `SETSLOT` to the nodes not involved in the resharding, is not technically necessary since the configuration will eventually propagate itself. 
However, it is a good idea to do so in order to stop nodes from pointing to the wrong node for the hash slot moved as soon as possible, resulting in fewer redirections to find the right node. redis FUNCTION FUNCTION ======== ``` FUNCTION LOAD ``` Syntax ``` FUNCTION LOAD [REPLACE] function-code ``` Available since: 7.0.0 Time complexity: O(1) (considering compilation time is redundant) ACL categories: `@write`, `@slow`, `@scripting`, Load a library to Redis. The command gets a single mandatory parameter, which is the source code that implements the library. The library payload must start with a Shebang statement that provides metadata about the library (like the engine to use and the library name). Shebang format: `#!<engine name> name=<library name>`. Currently the engine name must be `lua`. For the Lua engine, the implementation should declare one or more entry points to the library with the [`redis.register_function()` API](https://redis.io/topics/lua-api#redis.register_function). Once loaded, you can call the functions in the library with the [`FCALL`](../fcall) (or [`FCALL_RO`](../fcall_ro) when applicable) command. When attempting to load a library with a name that already exists, the Redis server returns an error. The `REPLACE` modifier changes this behavior and overwrites the existing library with the new contents. The command will return an error in the following circumstances: * An invalid *engine-name* was provided. * The library's name already exists without the `REPLACE` modifier. * A function in the library is created with a name that already exists in another library (even when `REPLACE` is specified). * The engine failed in creating the library's functions (due to a compilation error, for example). * No functions were declared by the library. For more information please refer to [Introduction to Redis Functions](https://redis.io/topics/functions-intro). 
Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): the library name that was loaded Examples -------- The following example will create a library named `mylib` with a single function, `myfunc`, that returns the first argument it gets. ``` redis> FUNCTION LOAD "#!lua name=mylib \n redis.register_function('myfunc', function(keys, args) return args[1] end)" mylib redis> FCALL myfunc 0 hello "hello" ```
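The Shebang format above can be checked with a small parser. This is a sketch of the documented `#!<engine name> name=<library name>` format, not Redis's actual parser; `parse_shebang` is a hypothetical helper name:

```python
import re

def parse_shebang(code):
    """Validate the '#!<engine> name=<library>' header that FUNCTION LOAD
    expects at the start of a library payload."""
    m = re.match(r"#!(\w+)\s+name=(\w+)", code)
    if not m:
        raise ValueError("missing or malformed shebang")
    engine, name = m.groups()
    if engine != "lua":  # currently the only supported engine
        raise ValueError("unsupported engine: " + engine)
    return engine, name

src = "#!lua name=mylib\nredis.register_function('myfunc', function(keys, args) return args[1] end)"
print(parse_shebang(src))  # ('lua', 'mylib')
```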
redis GRAPH.CONFIG GRAPH.CONFIG ============ ``` GRAPH.CONFIG GET ``` Syntax ``` GRAPH.CONFIG GET name ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Graph 2.2.11](https://redis.io/docs/stack/graph) Time complexity: Retrieves the current value of a RedisGraph configuration parameter. RedisGraph configuration parameters are detailed [here](https://redis.io/docs/stack/graph/configuration). `*` can be used to retrieve the value of all RedisGraph configuration parameters. ``` 127.0.0.1:6379> graph.config get * 1) 1) "TIMEOUT" 2) (integer) 0 2) 1) "CACHE_SIZE" 2) (integer) 25 3) 1) "ASYNC_DELETE" 2) (integer) 1 4) 1) "OMP_THREAD_COUNT" 2) (integer) 8 5) 1) "THREAD_COUNT" 2) (integer) 8 6) 1) "RESULTSET_SIZE" 2) (integer) -1 7) 1) "VKEY_MAX_ENTITY_COUNT" 2) (integer) 100000 8) 1) "MAX_QUEUED_QUERIES" 2) (integer) 4294967295 9) 1) "QUERY_MEM_CAPACITY" 2) (integer) 0 10) 1) "DELTA_MAX_PENDING_CHANGES" 2) (integer) 10000 11) 1) "NODE_CREATION_BUFFER" 2) (integer) 16384 ``` ``` 127.0.0.1:6379> graph.config get TIMEOUT 1) "TIMEOUT" 2) (integer) 0 ``` redis READONLY READONLY ======== ``` READONLY ``` Syntax ``` READONLY ``` Available since: 3.0.0 Time complexity: O(1) ACL categories: `@fast`, `@connection`, Enables read queries for a connection to a Redis Cluster replica node. Normally replica nodes will redirect clients to the authoritative master for the hash slot involved in a given command, however clients can use replicas in order to scale reads using the `READONLY` command. `READONLY` tells a Redis Cluster replica node that the client is willing to read possibly stale data and is not interested in running write queries. When the connection is in readonly mode, the cluster will send a redirection to the client only if the operation involves keys not served by the replica's master node. This may happen because: 1. The client sent a command about hash slots never served by the master of this replica. 2. 
The cluster was reconfigured (for example resharded) and the replica is no longer able to serve commands for a given hash slot. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) redis TDIGEST.TRIMMED_MEAN TDIGEST.TRIMMED\_MEAN ===================== ``` TDIGEST.TRIMMED_MEAN ``` Syntax ``` TDIGEST.TRIMMED_MEAN key low_cut_quantile high_cut_quantile ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.4.0](https://redis.io/docs/stack/bloom) Time complexity: O(N) where N is the number of centroids Returns an estimation of the mean value from the sketch, excluding observation values outside the low and high cutoff quantiles. Required arguments ------------------ `key` is key name for an existing t-digest sketch. `low_cut_quantile` Floating-point value in the range [0..1]; should be lower than `high_cut_quantile`. When equal to 0: No low cut. When higher than 0: Exclude observation values lower than this quantile. `high_cut_quantile` Floating-point value in the range [0..1]; should be higher than `low_cut_quantile`. When lower than 1: Exclude observation values higher than or equal to this quantile. When equal to 1: No high cut. Return value ------------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) estimation of the mean value. 'nan' if the sketch is empty. 
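The cutoff rules above can be modeled on the exact data with midpoint-rule quantiles (a sketch of the semantics only: a real t-digest holds centroids, not raw values, and merely estimates this quantity; `trimmed_mean` is a hypothetical helper name):

```python
def trimmed_mean(values, low_cut, high_cut):
    """Mean of the values whose midpoint-rule quantile q = (i + 0.5) / n
    survives the cutoffs: low_cut=0 means no low cut, high_cut=1 means no
    high cut, otherwise values at/above high_cut or below low_cut are
    excluded, mirroring the argument descriptions above."""
    s = sorted(values)
    n = len(s)
    kept = []
    for i, v in enumerate(s):
        q = (i + 0.5) / n
        if low_cut > 0 and q < low_cut:
            continue  # exclude below the low cutoff
        if high_cut < 1 and q >= high_cut:
            continue  # exclude at/above the high cutoff
        kept.append(v)
    return sum(kept) / len(kept) if kept else float("nan")

print(trimmed_mean(range(1, 11), 0.1, 0.6))  # 4.0
print(trimmed_mean(range(1, 11), 0, 1))      # 5.5
```

For the small data set 1..10 used in the examples below, this exact-data model happens to agree with the sketch's answers.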
Examples -------- ``` redis> TDIGEST.CREATE t COMPRESSION 1000 OK redis> TDIGEST.ADD t 1 2 3 4 5 6 7 8 9 10 OK redis> TDIGEST.TRIMMED_MEAN t 0.1 0.6 "4" redis> TDIGEST.TRIMMED_MEAN t 0.3 0.9 "6.5" redis> TDIGEST.TRIMMED_MEAN t 0 1 "5.5" ``` redis TS.GET TS.GET ====== ``` TS.GET ``` Syntax ``` TS.GET key [LATEST] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [TimeSeries 1.0.0](https://redis.io/docs/stack/timeseries) Time complexity: O(1) Get the sample with the highest timestamp from a given time series [Examples](#examples) Required arguments ------------------ `key` is key name for the time series. Optional arguments ------------------ `LATEST` (since RedisTimeSeries v1.8) is used when a time series is a compaction. With `LATEST`, TS.GET reports the compacted value of the latest, possibly partial, bucket. Without `LATEST`, TS.GET does not report the latest, possibly partial, bucket. When a time series is not a compaction, `LATEST` is ignored. The data in the latest bucket of a compaction is possibly partial. A bucket is *closed* and compacted only upon arrival of a new sample that *opens* a new *latest* bucket. There are cases, however, when the compacted value of the latest, possibly partial, bucket is also required. In such a case, use `LATEST`. 
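The notion of a *latest* bucket follows from how samples are aligned to buckets. A minimal Python sketch, assuming the default alignment where buckets begin at multiples of the bucket duration (`bucket_start` is a name invented here; custom alignment timestamps are ignored):

```python
def bucket_start(ts_ms, bucket_duration_ms):
    """Start timestamp of the compaction bucket a sample falls into,
    assuming default alignment: buckets begin at multiples of the
    bucket duration."""
    return ts_ms - (ts_ms % bucket_duration_ms)
```

With 24-hour (86400000 ms) buckets, a sample at 1672621200000 lands in the bucket starting at 1672617600000; that bucket stays open, and is reported only with `LATEST`, until a later sample opens a newer bucket.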
Return value ------------ One of: * [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) of a single ([Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings)) pair representing (timestamp, value(double)) of the sample with the highest timestamp * An empty [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) - when the time series is empty * [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors) (e.g., when the key does not exist or when the number of arguments is wrong) Examples -------- **Get latest measured temperature for a city** Create a time series to store the temperatures measured in Tel Aviv and add four measurements for Sun Jan 01 2023 ``` 127.0.0.1:6379> TS.CREATE temp:TLV LABELS type temp location TLV OK 127.0.0.1:6379> TS.MADD temp:TLV 1672534800 12 temp:TLV 1672556400 16 temp:TLV 1672578000 21 temp:TLV 1672599600 14 ``` Next, get the latest measured temperature (the temperature with the highest timestamp) ``` 127.0.0.1:6379> TS.GET temp:TLV 1) (integer) 1672599600 2) 14 ``` **Get latest maximal daily temperature for a city** Create a time series to store the temperatures measured in Jerusalem ``` 127.0.0.1:6379> TS.CREATE temp:JLM LABELS type temp location JLM OK ``` Next, create a compacted time series named *dailyMaxTemp:JLM* containing one compacted sample per 24 hours: the maximum of all measurements taken from midnight to next midnight. 
``` 127.0.0.1:6379> TS.CREATE dailyMaxTemp:JLM LABELS type temp location JLM OK 127.0.0.1:6379> TS.CREATERULE temp:JLM dailyMaxTemp:JLM AGGREGATION max 86400000 OK ``` Add four measurements for Sun Jan 01 2023 and three measurements for Mon Jan 02 2023 ``` 127.0.0.1:6379> TS.MADD temp:JLM 1672534800000 12 temp:JLM 1672556400000 16 temp:JLM 1672578000000 21 temp:JLM 1672599600000 14 1) (integer) 1672534800000 2) (integer) 1672556400000 3) (integer) 1672578000000 4) (integer) 1672599600000 127.0.0.1:6379> TS.MADD temp:JLM 1672621200000 11 temp:JLM 1672642800000 21 temp:JLM 1672664400000 26 1) (integer) 1672621200000 2) (integer) 1672642800000 3) (integer) 1672664400000 ``` Next, get the latest maximum daily temperature; do not report the latest, possibly partial, bucket ``` 127.0.0.1:6379> TS.GET dailyMaxTemp:JLM 1) (integer) 1672531200000 2) 21 ``` Get the latest maximum daily temperature (the temperature with the highest timestamp); report the latest, possibly partial, bucket ``` 127.0.0.1:6379> TS.GET dailyMaxTemp:JLM LATEST 1) (integer) 1672617600000 2) 26 ``` See also -------- [`TS.MGET`](../ts.mget) Related topics -------------- [RedisTimeSeries](https://redis.io/docs/stack/timeseries) redis LRANGE LRANGE ====== ``` LRANGE ``` Syntax ``` LRANGE key start stop ``` Available since: 1.0.0 Time complexity: O(S+N) where S is the distance of start offset from HEAD for small lists, from nearest end (HEAD or TAIL) for large lists; and N is the number of elements in the specified range. ACL categories: `@read`, `@list`, `@slow`, Returns the specified elements of the list stored at `key`. The offsets `start` and `stop` are zero-based indexes, with `0` being the first element of the list (the head of the list), `1` being the next element and so on. These offsets can also be negative numbers indicating offsets starting at the end of the list. For example, `-1` is the last element of the list, `-2` the penultimate, and so on. 
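The offset rules can be modeled in a few lines of Python. This is an illustrative sketch (the function name is invented here); note that Python slicing excludes the end index, so the inclusive stop offset needs a `+1`:

```python
def lrange(lst, start, stop):
    """Model of LRANGE offset handling: negative offsets count from
    the tail, stop is inclusive, and out-of-range indexes are clamped
    rather than raising an error."""
    n = len(lst)
    if start < 0:
        start = max(n + start, 0)
    if stop < 0:
        stop = n + stop
    if start >= n or stop < start:
        return []
    return lst[start:stop + 1]  # +1 because LRANGE's stop is inclusive
```

For instance, `lrange(["one", "two", "three"], -100, 100)` returns the whole list, mirroring the command's clamping of out-of-range indexes.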
Consistency with range functions in various programming languages ----------------------------------------------------------------- Note that if you have a list of numbers from 0 to 100, `LRANGE list 0 10` will return 11 elements, that is, the rightmost item is included. This **may or may not** be consistent with behavior of range-related functions in your programming language of choice (think Ruby's `Range.new`, `Array#slice` or Python's `range()` function). Out-of-range indexes -------------------- Out of range indexes will not produce an error. If `start` is larger than the end of the list, an empty list is returned. If `stop` is larger than the actual end of the list, Redis will treat it like the last element of the list. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list of elements in the specified range. Examples -------- ``` RPUSH mylist "one" RPUSH mylist "two" RPUSH mylist "three" LRANGE mylist 0 0 LRANGE mylist -3 2 LRANGE mylist -100 100 LRANGE mylist 5 10 ``` redis MOVE MOVE ==== ``` MOVE ``` Syntax ``` MOVE key db ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@keyspace`, `@write`, `@fast`, Move `key` from the currently selected database (see [`SELECT`](../select)) to the specified destination database. When `key` already exists in the destination database, or it does not exist in the source database, it does nothing. It is possible to use `MOVE` as a locking primitive because of this. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), specifically: * `1` if `key` was moved. * `0` if `key` was not moved. redis DBSIZE DBSIZE ====== ``` DBSIZE ``` Syntax ``` DBSIZE ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@keyspace`, `@read`, `@fast`, Return the number of keys in the currently-selected database. 
Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) redis TOPK.LIST TOPK.LIST ========= ``` TOPK.LIST ``` Syntax ``` TOPK.LIST key [WITHCOUNT] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(k) where k is the value of top-k Return full list of items in Top K list. ### Parameters * **key**: Name of sketch where item is counted. * **WITHCOUNT**: Count of each element is returned. Return ------ k (or less) items in Top K list. [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) of [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) - the names of items in the TopK list. If `WITHCOUNT` is requested, [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) of [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) and [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) pairs of the names of items in the TopK list and their count. Examples -------- ``` TOPK.LIST topk 1) foo 2) 42 3) bar ``` ``` TOPK.LIST topk WITHCOUNT 1) foo 2) (integer) 12 3) 42 4) (integer) 7 5) bar 6) (integer) 2 ``` redis CLIENT CLIENT ====== ``` CLIENT NO-EVICT ``` Syntax ``` CLIENT NO-EVICT <ON | OFF> ``` Available since: 7.0.0 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, `@connection`, The `CLIENT NO-EVICT` command sets the [client eviction](https://redis.io/topics/clients#client-eviction) mode for the current connection. When turned on and client eviction is configured, the current connection will be excluded from the client eviction process even if we're above the configured client eviction threshold. When turned off, the current client will be re-included in the pool of potential clients to be evicted (and evicted if needed). See [client eviction](https://redis.io/topics/clients#client-eviction) for more details. 
Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK`. redis OBJECT OBJECT ====== ``` OBJECT ENCODING ``` Syntax ``` OBJECT ENCODING key ``` Available since: 2.2.3 Time complexity: O(1) ACL categories: `@keyspace`, `@read`, `@slow`, Returns the internal encoding for the Redis object stored at `<key>`. Redis objects can be encoded in different ways: * Strings can be encoded as: + `raw`, normal string encoding. + `int`, strings representing integers in a 64-bit signed interval, encoded in this way to save space. + `embstr`, an embedded string, which is an object where the internal simple dynamic string, `sds`, is an unmodifiable string allocated in the same chunk as the object itself. `embstr` can be strings with lengths up to the hardcoded limit of `OBJ_ENCODING_EMBSTR_SIZE_LIMIT`, which is 44 bytes. * Lists can be encoded as `ziplist` or `linkedlist`. The `ziplist` is the special representation that is used to save space for small lists. * Sets can be encoded as `intset` or `hashtable`. The `intset` is a special encoding used for small sets composed solely of integers. * Hashes can be encoded as `ziplist` or `hashtable`. The `ziplist` is a special encoding used for small hashes. * Sorted Sets can be encoded as `ziplist` or `skiplist` format. As for the List type, small sorted sets can be specially encoded using `ziplist`, while the `skiplist` encoding is the one that works with sorted sets of any size. All the specially encoded types are automatically converted to the general type once you perform an operation that makes it impossible for Redis to retain the space saving encoding. Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): the encoding of the object, or `nil` if the key doesn't exist redis CF.INSERTNX CF.INSERTNX =========== ``` CF.INSERTNX ``` Syntax ``` CF.INSERTNX key [CAPACITY capacity] [NOCREATE] ITEMS item [item ...] 
``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 1.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(n \* (k + i)), where n is the number of items, k is the number of sub-filters and i is maxIterations CF.INSERT --------- CF.INSERTNX ----------- Note: `CF.INSERTNX` is an advanced command that can have unintended impact if used incorrectly. ``` CF.INSERT {key} [CAPACITY {capacity}] [NOCREATE] ITEMS {item ...} CF.INSERTNX {key} [CAPACITY {capacity}] [NOCREATE] ITEMS {item ...} ``` ### Description Adds one or more items to a cuckoo filter, allowing the filter to be created with a custom capacity if it does not exist yet. This command is equivalent to a [`CF.EXISTS`](../cf.exists) + [`CF.ADD`](../cf.add) command. It does not insert an element into the filter if its fingerprint already exists and therefore better utilizes the available capacity. However, if you delete elements, it might introduce a **false negative** error rate! These commands offer more flexibility than the `ADD` and `ADDNX` commands, at the cost of more verbosity. ### Parameters * **key**: The name of the filter * **capacity**: Specifies the desired capacity of the new filter, if this filter does not exist yet. If the filter already exists, then this parameter is ignored. If the filter does not exist yet and this parameter is *not* specified, then the filter is created with the module-level default capacity which is 1024. See [`CF.RESERVE`](../cf.reserve) for more information on cuckoo filter capacities. * **NOCREATE**: If specified, prevents automatic filter creation if the filter does not exist. Instead, an error is returned if the filter does not already exist. This option is mutually exclusive with `CAPACITY`. * **item**: One or more items to add. The `ITEMS` keyword must precede the list of items to add. ### Complexity O(n + i), where n is the number of `sub-filters` and i is `maxIterations`. Adding items requires up to 2 memory accesses per `sub-filter`. 
But as the filter fills up, both locations for an item might be full. The filter attempts to `Cuckoo` swap items up to `maxIterations` times. ### Returns An array of booleans (as integers) corresponding to the items specified. Possible values for each element are: * `> 0` if the item was successfully inserted * `0` if the item already existed *and* `INSERTNX` is used. * `<0` if an error occurred Note that for [`CF.INSERT`](../cf.insert), the return value is always an array of `>0` values, unless an error occurs. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) of [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) - where "1" means the item has been added to the filter, and "0" means the item already existed. [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors) when filter parameters are erroneous Examples -------- ``` redis> CF.INSERTNX cf CAPACITY 1000 ITEMS item1 item2 1) (integer) 1 2) (integer) 1 ``` ``` redis> CF.INSERTNX cf CAPACITY 1000 ITEMS item1 item2 item3 1) (integer) 0 2) (integer) 0 3) (integer) 1 ``` ``` redis> CF.INSERTNX cf_new CAPACITY 1000 NOCREATE ITEMS item1 item2 (error) ERR not found ``` redis GRAPH.EXPLAIN GRAPH.EXPLAIN ============= ``` GRAPH.EXPLAIN ``` Syntax ``` GRAPH.EXPLAIN graph query ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Graph 2.0.0](https://redis.io/docs/stack/graph) Time complexity: Constructs a query execution plan but does not run it. Inspect this execution plan to better understand how your query will get executed. Arguments: `Graph name, Query` Returns: `String representation of a query execution plan` ``` GRAPH.EXPLAIN us_government "MATCH (p:President)-[:BORN]->(h:State {name:'Hawaii'}) RETURN p" ``` redis FT.SYNUPDATE FT.SYNUPDATE ============ ``` FT.SYNUPDATE ``` Syntax ``` FT.SYNUPDATE index synonym_group_id [SKIPINITIALSCAN] term [term ...] 
``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.2.0](https://redis.io/docs/stack/search) Time complexity: O(1) Update a synonym group [Examples](#examples) Required arguments ------------------ `index` is index name. `synonym_group_id` is the synonym group to update. Use FT.SYNUPDATE to create or update a synonym group with additional terms. The command triggers a scan of all documents. Optional parameters ------------------- `SKIPINITIALSCAN`, if set, does not scan and index; only documents that are indexed after the update are affected. Return ------ FT.SYNUPDATE returns a simple string reply `OK` if executed correctly, or an error reply otherwise. Examples -------- **Update a synonym group** ``` 127.0.0.1:6379> FT.SYNUPDATE idx synonym hello hi shalom OK ``` ``` 127.0.0.1:6379> FT.SYNUPDATE idx synonym SKIPINITIALSCAN hello hi shalom OK ``` See also -------- [`FT.SYNDUMP`](../ft.syndump) Related topics -------------- [RediSearch](https://redis.io/docs/stack/search) redis READWRITE READWRITE ========= ``` READWRITE ``` Syntax ``` READWRITE ``` Available since: 3.0.0 Time complexity: O(1) ACL categories: `@fast`, `@connection`, Disables read queries for a connection to a Redis Cluster replica node. Read queries against a Redis Cluster replica node are disabled by default, but you can use the [`READONLY`](../readonly) command to change this behavior on a per-connection basis. The `READWRITE` command resets the readonly mode flag of a connection back to readwrite. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) redis PFDEBUG PFDEBUG ======= ``` PFDEBUG ``` Syntax ``` PFDEBUG subcommand key ``` Available since: 2.8.9 Time complexity: N/A ACL categories: `@write`, `@hyperloglog`, `@admin`, `@slow`, `@dangerous`, The `PFDEBUG` command is an internal command. It is meant to be used for developing and testing Redis.
redis SINTER SINTER ====== ``` SINTER ``` Syntax ``` SINTER key [key ...] ``` Available since: 1.0.0 Time complexity: O(N\*M) worst case where N is the cardinality of the smallest set and M is the number of sets. ACL categories: `@read`, `@set`, `@slow`, Returns the members of the set resulting from the intersection of all the given sets. For example: ``` key1 = {a,b,c,d} key2 = {c} key3 = {a,c,e} SINTER key1 key2 key3 = {c} ``` Keys that do not exist are considered to be empty sets. With one of the keys being an empty set, the resulting set is also empty (since set intersection with an empty set always results in an empty set). Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list with members of the resulting set. Examples -------- ``` SADD key1 "a" SADD key1 "b" SADD key1 "c" SADD key2 "c" SADD key2 "d" SADD key2 "e" SINTER key1 key2 ``` redis SCRIPT SCRIPT ====== ``` SCRIPT DEBUG ``` Syntax ``` SCRIPT DEBUG <YES | SYNC | NO> ``` Available since: 3.2.0 Time complexity: O(1) ACL categories: `@slow`, `@scripting`, Set the debug mode for subsequent scripts executed with [`EVAL`](../eval). Redis includes a complete Lua debugger, codename LDB, that can be used to make the task of writing complex scripts much simpler. In debug mode Redis acts as a remote debugging server and a client, such as `redis-cli`, can execute scripts step by step, set breakpoints, inspect variables and more - for additional information about LDB refer to the [Redis Lua debugger](https://redis.io/topics/ldb) page. **Important note:** avoid debugging Lua scripts using your Redis production server. Use a development server instead. LDB can be enabled in one of two modes: asynchronous or synchronous. In asynchronous mode the server creates a forked debugging session that does not block and all changes to the data are **rolled back** after the session finishes, so debugging can be restarted using the same initial state. 
The alternative synchronous debug mode blocks the server while the debugging session is active and retains all changes to the data set once it ends. * `YES`. Enable non-blocking asynchronous debugging of Lua scripts (changes are discarded). * `SYNC`. Enable blocking synchronous debugging of Lua scripts (saves changes to data). * `NO`. Disables script debug mode. For more information about [`EVAL`](../eval) scripts please refer to [Introduction to Eval Scripts](https://redis.io/topics/eval-intro). Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK`. redis ZSCAN ZSCAN ===== ``` ZSCAN ``` Syntax ``` ZSCAN key cursor [MATCH pattern] [COUNT count] ``` Available since: 2.8.0 Time complexity: O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0. N is the number of elements inside the collection. ACL categories: `@read`, `@sortedset`, `@slow`, See [`SCAN`](../scan) for `ZSCAN` documentation. redis FCALL_RO FCALL\_RO ========= ``` FCALL_RO ``` Syntax ``` FCALL_RO function numkeys [key [key ...]] [arg [arg ...]] ``` Available since: 7.0.0 Time complexity: Depends on the function that is executed. ACL categories: `@slow`, `@scripting`, This is a read-only variant of the [`FCALL`](../fcall) command that cannot execute commands that modify data. For more information about when to use this command vs [`FCALL`](../fcall), please refer to [Read-only scripts](https://redis.io/docs/manual/programmability/#read-only_scripts). For more information please refer to [Introduction to Redis Functions](https://redis.io/topics/functions-intro). redis TOPK.COUNT TOPK.COUNT ========== ``` TOPK.COUNT (deprecated) ``` As of Bloom version 2.4, this command is regarded as deprecated. Syntax ``` TOPK.COUNT key item [item ...] 
``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(n) where n is the number of items Returns count for an item. Multiple items can be requested at once. Please note this number will never be higher than the real count and is likely to be lower. This command has been deprecated. The count value is not representative of the number of appearances of an item. ### Parameters * **key**: Name of sketch where item is counted. * **item**: Item(s) to be counted. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) of [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) - count for the corresponding item. Examples -------- ``` redis> TOPK.COUNT topk foo 42 nonexist 1) (integer) 3 2) (integer) 1 3) (integer) 0 ``` redis SPUBLISH SPUBLISH ======== ``` SPUBLISH ``` Syntax ``` SPUBLISH shardchannel message ``` Available since: 7.0.0 Time complexity: O(N) where N is the number of clients subscribed to the receiving shard channel. ACL categories: `@pubsub`, `@fast`, Posts a message to the given shard channel. In Redis Cluster, shard channels are assigned to slots by the same algorithm used to assign keys to slots. A shard message must be sent to a node that owns the slot the shard channel is hashed to. The cluster makes sure that published shard messages are forwarded to all the nodes in the shard, so clients can subscribe to a shard channel by connecting to any one of the nodes in the shard. For more information about sharded pubsub, see [Sharded Pubsub](https://redis.io/topics/pubsub#sharded-pubsub). Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of clients that received the message. Note that in a Redis Cluster, only clients that are connected to the same node as the publishing client are included in the count. 
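Shard channels hash to slots the same way keys do: CRC16 (the XModem variant) modulo 16384, per the Redis Cluster specification. A minimal Python sketch of that mapping; hash-tag (`{...}`) handling is omitted here for brevity:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC-16/XMODEM (polynomial 0x1021, initial value 0x0000), the
    checksum Redis Cluster uses for slot assignment."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def cluster_slot(channel: str) -> int:
    """Slot a shard channel (or key) maps to; ignores {hash tags}."""
    return crc16_xmodem(channel.encode()) % 16384
```

The specification's check value, `CRC16("123456789") == 0x31C3`, is a quick self-test for the implementation.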
Examples -------- For example, the following command publishes to channel `orders`, which has a subscriber already waiting for messages. ``` > spublish orders hello (integer) 1 ``` redis CLIENT CLIENT ====== ``` CLIENT UNBLOCK ``` Syntax ``` CLIENT UNBLOCK client-id [TIMEOUT | ERROR] ``` Available since: 5.0.0 Time complexity: O(log N) where N is the number of client connections ACL categories: `@admin`, `@slow`, `@dangerous`, `@connection`, This command can unblock, from a different connection, a client blocked in a blocking operation, such as for instance [`BRPOP`](../brpop) or [`XREAD`](../xread) or [`WAIT`](../wait). By default the client is unblocked as if the timeout of the command was reached, however if an additional (and optional) argument is passed, it is possible to specify the unblocking behavior, that can be **TIMEOUT** (the default) or **ERROR**. If **ERROR** is specified, the behavior is to unblock the client returning as error the fact that the client was force-unblocked. Specifically the client will receive the following error: ``` -UNBLOCKED client unblocked via CLIENT UNBLOCK ``` Note: of course, as usual, it is not guaranteed that the error text remains the same, however the error code will remain `-UNBLOCKED`. This command is useful especially when we are monitoring many keys with a limited number of connections. For instance we may want to monitor multiple streams with [`XREAD`](../xread) without using more than N connections. However at some point the consumer process is informed that there is one more stream key to monitor. In order to avoid using more connections, the best behavior would be to stop the blocking command from one of the connections in the pool, add the new key, and issue the blocking command again. To obtain this behavior the following pattern is used. The process uses an additional *control connection* in order to send the `CLIENT UNBLOCK` command if needed. 
In the meantime, before running the blocking operation on the other connections, the process runs [`CLIENT ID`](../client-id) in order to get the ID associated with that connection. When a new key should be added, or when a key should no longer be monitored, the relevant connection blocking command is aborted by sending `CLIENT UNBLOCK` in the control connection. The blocking command will return and can be finally reissued. This example shows the application in the context of Redis streams, however the pattern is a general one and can be applied to other cases. Examples -------- ``` Connection A (blocking connection): > CLIENT ID 2934 > BRPOP key1 key2 key3 0 (client is blocked) ... Now we want to add a new key ... Connection B (control connection): > CLIENT UNBLOCK 2934 1 Connection A (blocking connection): ... BRPOP reply with timeout ... NULL > BRPOP key1 key2 key3 key4 0 (client is blocked again) ``` Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), specifically: * `1` if the client was unblocked successfully. * `0` if the client wasn't unblocked. redis EVALSHA EVALSHA ======= ``` EVALSHA ``` Syntax ``` EVALSHA sha1 numkeys [key [key ...]] [arg [arg ...]] ``` Available since: 2.6.0 Time complexity: Depends on the script that is executed. ACL categories: `@slow`, `@scripting`, Evaluate a script from the server's cache by its SHA1 digest. The server caches scripts by using the [`SCRIPT LOAD`](../script-load) command. The command is otherwise identical to [`EVAL`](../eval). Please refer to the [Redis Programmability](https://redis.io/topics/programmability) and [Introduction to Eval Scripts](https://redis.io/topics/eval-intro) for more information about Lua scripts. redis TDIGEST.RANK TDIGEST.RANK ============ ``` TDIGEST.RANK ``` Syntax ``` TDIGEST.RANK key value [value ...] 
``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.4.0](https://redis.io/docs/stack/bloom) Time complexity: O(1) Returns, for each input value (floating-point), the estimated rank of the value (the number of observations in the sketch that are smaller than the value + half the number of observations that are equal to the value). Multiple ranks can be retrieved in a single call. Required arguments ------------------ `key` is key name for an existing t-digest sketch. `value` is input value for which the rank should be estimated. Return value ------------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) - an array of integers populated with rank\_1, rank\_2, ..., rank\_V: * -1 - when `value` is smaller than the value of the smallest observation. * The number of observations - when `value` is larger than the value of the largest observation. * Otherwise: an estimation of the number of (observations smaller than `value` + half the observations equal to `value`). 0 is the rank of the value of the smallest observation. *n*-1 is the rank of the value of the largest observation; *n* denotes the number of observations added to the sketch. All values are -2 if the sketch is empty. 
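These rules can be checked against an exact computation on raw observations. The following is an illustrative Python sketch, not the t-digest estimator itself; `tdigest_rank` is a name chosen here:

```python
def tdigest_rank(observations, value):
    """Exact analogue of the TDIGEST.RANK rules: -2 for an empty sketch,
    -1 below the minimum, n above the maximum, otherwise the count of
    smaller observations plus half the equal ones (truncated to int)."""
    if not observations:
        return -2
    if value < min(observations):
        return -1
    if value > max(observations):
        return len(observations)
    smaller = sum(1 for v in observations if v < value)
    equal = sum(1 for v in observations if v == value)
    return int(smaller + equal / 2)
```

On `[10, 10, 10, 10, 20, 20]` this gives rank 2 for the value 10 (0 smaller + 4/2 equal) and rank 5 for 20 (4 smaller + 2/2 equal), matching the second command example below.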
Examples -------- ``` redis> TDIGEST.CREATE s COMPRESSION 1000 OK redis> TDIGEST.ADD s 10 20 30 40 50 60 OK redis> TDIGEST.RANK s 0 10 20 30 40 50 60 70 1) (integer) -1 2) (integer) 0 3) (integer) 1 4) (integer) 2 5) (integer) 3 6) (integer) 4 7) (integer) 5 8) (integer) 6 redis> TDIGEST.REVRANK s 0 10 20 30 40 50 60 70 1) (integer) 6 2) (integer) 5 3) (integer) 4 4) (integer) 3 5) (integer) 2 6) (integer) 1 7) (integer) 0 8) (integer) -1 ``` ``` redis> TDIGEST.CREATE s COMPRESSION 1000 OK redis> TDIGEST.ADD s 10 10 10 10 20 20 OK redis> TDIGEST.RANK s 10 20 1) (integer) 2 2) (integer) 5 redis> TDIGEST.REVRANK s 10 20 1) (integer) 4 2) (integer) 1 ``` redis ACL ACL === ``` ACL WHOAMI ``` Syntax ``` ACL WHOAMI ``` Available since: 6.0.0 Time complexity: O(1) ACL categories: `@slow`, Return the username the current connection is authenticated with. New connections are authenticated with the "default" user. They can change user using [`AUTH`](../auth). Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): the username of the current connection. Examples -------- ``` > ACL WHOAMI "default" ``` redis SLOWLOG SLOWLOG ======= ``` SLOWLOG LEN ``` Syntax ``` SLOWLOG LEN ``` Available since: 2.2.12 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, This command returns the current number of entries in the slow log. A new entry is added to the slow log whenever a command exceeds the execution time threshold defined by the `slowlog-log-slower-than` configuration directive. The maximum number of entries in the slow log is governed by the `slowlog-max-len` configuration directive. Once the slow log reaches its maximal size, the oldest entry is removed whenever a new entry is created. The slow log can be cleared with the [`SLOWLOG RESET`](../slowlog-reset) command. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of entries in the slow log. 
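The bookkeeping described for the slow log amounts to a threshold check plus a bounded FIFO. A toy Python model of that behavior (the names and the `maxlen` value are illustrative, not Redis internals):

```python
from collections import deque

SLOWLOG_LOG_SLOWER_THAN = 10000      # threshold in microseconds
log = deque(maxlen=3)                # stands in for slowlog-max-len 3

def maybe_log(command, duration_us):
    """Add an entry when a command exceeds the threshold; the bounded
    deque drops the oldest entry once the log is full, and SLOWLOG LEN
    is then simply the current length."""
    if duration_us > SLOWLOG_LOG_SLOWER_THAN:
        log.append((command, duration_us))
    return len(log)
```

Fast commands never enter the log; once three slow commands have been recorded, a fourth evicts the oldest, so the reported length never exceeds the configured maximum.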
redis FT.ALTER FT.ALTER ======== ``` FT.ALTER ``` Syntax ``` FT.ALTER {index} [SKIPINITIALSCAN] SCHEMA ADD {attribute} {options} ... ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.0.0](https://redis.io/docs/stack/search) Time complexity: O(N) where N is the number of keys in the keyspace Add a new attribute to the index. Adding an attribute to the index causes any future document updates to use the new attribute when indexing and reindexing existing documents. [Examples](#examples) Required arguments ------------------ `index` is the name of the index to alter. `SKIPINITIALSCAN` if set, does not scan and index. `SCHEMA ADD {attribute} {options} ...` after the SCHEMA keyword, declares which fields to add: * `attribute` is attribute to add. * `options` are attribute options. Refer to [`FT.CREATE`](../ft.create) for more information. **Note:** Depending on how the index was created, you may be limited by the number of additional text attributes which can be added to an existing index. If the current index contains fewer than 32 text attributes, then `SCHEMA ADD` will only be able to add attributes up to 32 total attributes (meaning that the index will only ever be able to contain 32 total text attributes). If you wish for the index to contain more than 32 attributes, create it with the `MAXTEXTFIELDS` option. Return ------ FT.ALTER returns a simple string reply `OK` if executed correctly, or an error reply otherwise. Examples -------- **Alter an index** ``` 127.0.0.1:6379> FT.ALTER idx SCHEMA ADD id2 NUMERIC SORTABLE OK ``` See also -------- [`FT.CREATE`](../ft.create) Related topics -------------- * [RediSearch](https://redis.io/docs/stack/search) redis BLPOP BLPOP ===== ``` BLPOP ``` Syntax ``` BLPOP key [key ...] timeout ``` Available since: 2.0.0 Time complexity: O(N) where N is the number of provided keys. ACL categories: `@write`, `@list`, `@slow`, `@blocking`, `BLPOP` is a blocking list pop primitive. 
It is the blocking version of [`LPOP`](../lpop) because it blocks the connection when there are no elements to pop from any of the given lists. An element is popped from the head of the first list that is non-empty, with the given keys being checked in the order that they are given. Non-blocking behavior --------------------- When `BLPOP` is called, if at least one of the specified keys contains a non-empty list, an element is popped from the head of the list and returned to the caller together with the `key` it was popped from. Keys are checked in the order that they are given. Let's say that the key `list1` doesn't exist and `list2` and `list3` hold non-empty lists. Consider the following command: ``` BLPOP list1 list2 list3 0 ``` `BLPOP` guarantees to return an element from the list stored at `list2` (since it is the first non empty list when checking `list1`, `list2` and `list3` in that order). Blocking behavior ----------------- If none of the specified keys exist, `BLPOP` blocks the connection until another client performs an [`LPUSH`](../lpush) or [`RPUSH`](../rpush) operation against one of the keys. Once new data is present on one of the lists, the client returns with the name of the key unblocking it and the popped value. When `BLPOP` causes a client to block and a non-zero timeout is specified, the client will unblock returning a `nil` multi-bulk value when the specified timeout has expired without a push operation against at least one of the specified keys. **The timeout argument is interpreted as a double value specifying the maximum number of seconds to block**. A timeout of zero can be used to block indefinitely. What key is served first? What client? What element? Priority ordering details. 
------------------------------------------------------------------------------- * If the client tries to block for multiple keys, but at least one key contains elements, the returned key / element pair is the first key from left to right that has one or more elements. In this case the client is not blocked. So for instance `BLPOP key1 key2 key3 key4 0`, assuming that both `key2` and `key4` are non-empty, will always return an element from `key2`. * If multiple clients are blocked for the same key, the first client to be served is the one that has been waiting the longest (the first that blocked for the key). Once a client is unblocked it does not retain any priority: when it blocks again with the next call to `BLPOP` it will be served according to the number of clients already blocked for the same key, that will all be served before it (from the first to the last that blocked). * When a client is blocking for multiple keys at the same time, and elements are available at the same time in multiple keys (because a transaction or a Lua script added elements to multiple lists), the client will be unblocked using the first key that received a push operation (assuming it has enough elements to serve our client, as there may be other clients as well waiting for this key). Basically, after the execution of every command, Redis runs through the list of all the keys that received data AND that have at least one client blocked. The list is ordered by new element arrival time, from the first key that received data to the last. For every key processed, Redis will serve all the clients waiting for that key in a FIFO fashion, as long as there are elements in this key. When the key is empty or there are no longer clients waiting for this key, the next key that received new data in the previous command / transaction / script is processed, and so forth. Behavior of `BLPOP` when multiple elements are pushed inside a list. 
-------------------------------------------------------------------- There are times when a list can receive multiple elements in the context of the same conceptual command: * Variadic push operations such as `LPUSH mylist a b c`. * After an [`EXEC`](../exec) of a [`MULTI`](../multi) block with multiple push operations against the same list. * Executing a Lua Script with Redis 2.6 or newer. When multiple elements are pushed inside a list where there are clients blocking, the behavior is different for Redis 2.4 and Redis 2.6 or newer. For Redis 2.6 what happens is that the command performing multiple pushes is executed, and *only after* the execution of the command are the blocked clients served. Consider this sequence of commands. ``` Client A: BLPOP foo 0 Client B: LPUSH foo a b c ``` If the above condition happens using a Redis 2.6 server or greater, Client **A** will be served with the `c` element, because after the [`LPUSH`](../lpush) command the list contains `c,b,a`, so taking an element from the left means returning `c`. Redis 2.4 instead works in a different way: clients are served *in the context* of the push operation, so as soon as `LPUSH foo a b c` starts pushing the first element to the list, it is delivered to Client **A**, which receives `a` (the first element pushed). The behavior of Redis 2.4 creates a lot of problems when replicating or persisting data into the AOF file, so the much more generic and semantically simpler behavior was introduced into Redis 2.6 to prevent problems. Note that for the same reason a Lua script or a `MULTI/EXEC` block may push elements into a list and afterward **delete the list**. In this case the blocked clients will not be served at all and will continue to be blocked as long as no data is present on the list after the execution of a single command, transaction, or script. 
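The 2.6-style ordering can be sketched with a toy model (plain Python standing in for Redis, not real client code): the variadic push completes first, and only then is the blocked client served from the head of the resulting list.

```python
from collections import deque

# Toy model of a Redis list: LPUSH prepends each argument in turn,
# BLPOP pops from the head (left side).
def lpush(lst: deque, *values):
    for v in values:
        lst.appendleft(v)  # LPUSH foo a b c leaves the list as c,b,a

def blpop_ready(lst: deque):
    # In Redis >= 2.6 the blocked client is served only after the whole
    # variadic push has executed, so it sees the final state of the list.
    return lst.popleft() if lst else None

foo = deque()
lpush(foo, "a", "b", "c")   # list is now c,b,a
served = blpop_ready(foo)   # the blocked client receives "c"
```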
`BLPOP` inside a `MULTI` / `EXEC` transaction ---------------------------------------------- `BLPOP` can be used with pipelining (sending multiple commands and reading the replies in batch), however this setup makes sense almost solely when it is the last command of the pipeline. Using `BLPOP` inside a [`MULTI`](../multi) / [`EXEC`](../exec) block does not make a lot of sense as it would require blocking the entire server in order to execute the block atomically, which in turn does not allow other clients to perform a push operation. For this reason the behavior of `BLPOP` inside [`MULTI`](../multi) / [`EXEC`](../exec) when the list is empty is to return a `nil` multi-bulk reply, which is the same thing that happens when the timeout is reached. If you like science fiction, think of time flowing at infinite speed inside a [`MULTI`](../multi) / [`EXEC`](../exec) block... Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): specifically: * A `nil` multi-bulk when no element could be popped and the timeout expired. * A two-element multi-bulk with the first element being the name of the key where an element was popped and the second element being the value of the popped element. Examples -------- ``` redis> DEL list1 list2 (integer) 0 redis> RPUSH list1 a b c (integer) 3 redis> BLPOP list1 list2 0 1) "list1" 2) "a" ``` Reliable queues --------------- When `BLPOP` returns an element to the client, it also removes the element from the list. This means that the element only exists in the context of the client: if the client crashes while processing the returned element, it is lost forever. This can be a problem with some applications where we want a more reliable messaging system. When this is the case, please check the [`BRPOPLPUSH`](../brpoplpush) command, which is a variant of `BLPOP` that adds the returned element to a target list before returning it to the client. 
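The idea behind the `BRPOPLPUSH`-style reliable queue can be sketched with a toy model (plain Python, illustrative only): the pop and the push onto a backup list happen as one step, so a crashed worker never loses the element.

```python
from collections import deque

def brpoplpush_model(source: deque, backup: deque):
    """Toy model of BRPOPLPUSH: pop from the tail of `source` and push the
    element onto the head of `backup` in one step, so the element is never
    held only in the client's memory."""
    if not source:
        return None
    item = source.pop()          # RPOP side: take from the tail
    backup.appendleft(item)      # LPUSH side: store in the backup list
    return item

work, processing = deque(["job1", "job2"]), deque()
item = brpoplpush_model(work, processing)
# If the worker crashes here, "job2" still exists in `processing`
# and a recovery process can re-queue it.
```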
Pattern: Event notification --------------------------- Using blocking list operations it is possible to build different blocking primitives. For instance, for some applications you may need to block waiting for elements in a Redis Set, so that as soon as a new element is added to the Set, it is possible to retrieve it without resorting to polling. This would require a blocking version of [`SPOP`](../spop) that is not available, but using blocking list operations we can easily accomplish this task. The consumer will do: ``` LOOP forever WHILE SPOP(key) returns elements ... process elements ... END BRPOP helper_key END ``` While on the producer side we'll simply use: ``` MULTI SADD key element LPUSH helper_key x EXEC ``` History ------- * Starting with Redis version 6.0.0: `timeout` is interpreted as a double instead of an integer.
redis FT.DICTADD FT.DICTADD ========== ``` FT.DICTADD ``` Syntax ``` FT.DICTADD dict term [term ...] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.4.0](https://redis.io/docs/stack/search) Time complexity: O(1) Add terms to a dictionary [Examples](#examples) Required arguments ------------------ `dict` is dictionary name. `term` term to add to the dictionary. Return ------ FT.DICTADD returns an integer reply, the number of new terms that were added. Examples -------- **Add terms to a dictionary** ``` 127.0.0.1:6379> FT.DICTADD dict foo bar "hello world" (integer) 3 ``` See also -------- [`FT.DICTDEL`](../ft.dictdel) | [`FT.DICTDUMP`](../ft.dictdump) Related topics -------------- [RediSearch](https://redis.io/docs/stack/search) redis INCR INCR ==== ``` INCR ``` Syntax ``` INCR key ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@write`, `@string`, `@fast`, Increments the number stored at `key` by one. If the key does not exist, it is set to `0` before performing the operation. An error is returned if the key contains a value of the wrong type or contains a string that can not be represented as integer. This operation is limited to 64 bit signed integers. **Note**: this is a string operation because Redis does not have a dedicated integer type. The string stored at the key is interpreted as a base-10 **64 bit signed integer** to execute the operation. Redis stores integers in their integer representation, so for string values that actually hold an integer, there is no overhead for storing the string representation of the integer. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the value of `key` after the increment Examples -------- ``` SET mykey "10" INCR mykey GET mykey ``` Pattern: Counter ---------------- The counter pattern is the most obvious thing you can do with Redis atomic increment operations. 
The idea is simply to send an `INCR` command to Redis every time an operation occurs. For instance in a web application we may want to know how many page views this user did every day of the year. To do so the web application may simply increment a key every time the user performs a page view, creating the key name by concatenating the User ID and a string representing the current date. This simple pattern can be extended in many ways: * It is possible to use `INCR` and [`EXPIRE`](../expire) together at every page view to have a counter counting only the latest N page views separated by less than the specified amount of seconds. * A client may use `GETSET` in order to atomically get the current counter value and reset it to zero. * Using other atomic increment/decrement commands like [`DECR`](../decr) or [`INCRBY`](../incrby) it is possible to handle values that may get bigger or smaller depending on the operations performed by the user. Imagine for instance the score of different users in an online game. Pattern: Rate limiter --------------------- The rate limiter pattern is a special counter that is used to limit the rate at which an operation can be performed. The classical materialization of this pattern involves limiting the number of requests that can be performed against a public API. We provide two implementations of this pattern using `INCR`, where we assume that the problem to solve is limiting the number of API calls to a maximum of *ten requests per second per IP address*. Pattern: Rate limiter 1 ----------------------- The simplest and most direct implementation of this pattern is the following: ``` FUNCTION LIMIT_API_CALL(ip) ts = CURRENT_UNIX_TIME() keyname = ip+":"+ts MULTI INCR(keyname) EXPIRE(keyname,10) EXEC current = RESPONSE_OF_INCR_WITHIN_MULTI IF current > 10 THEN ERROR "too many requests per second" ELSE PERFORM_API_CALL() END ``` Basically we have a counter for every IP, for every different second. 
But these counters are always incremented setting an expire of 10 seconds so that they'll be removed by Redis automatically once the current second has passed. Note the use of [`MULTI`](../multi) and [`EXEC`](../exec) in order to make sure that we'll both increment and set the expire at every API call. Pattern: Rate limiter 2 ----------------------- An alternative implementation uses a single counter, but it is a bit more complex to get right without race conditions. We'll examine different variants. ``` FUNCTION LIMIT_API_CALL(ip): current = GET(ip) IF current != NULL AND current > 10 THEN ERROR "too many requests per second" ELSE value = INCR(ip) IF value == 1 THEN EXPIRE(ip,1) END PERFORM_API_CALL() END ``` The counter is created in a way that it only will survive one second, starting from the first request performed in the current second. If there are more than 10 requests in the same second the counter will reach a value greater than 10, otherwise it will expire and start again from 0. **In the above code there is a race condition**. If for some reason the client performs the `INCR` command but does not perform the [`EXPIRE`](../expire), the key will be leaked until we see the same IP address again. This can easily be fixed by turning the `INCR` with optional [`EXPIRE`](../expire) into a Lua script that is sent using the [`EVAL`](../eval) command (only available since Redis version 2.6). ``` local current current = redis.call("incr",KEYS[1]) if current == 1 then redis.call("expire",KEYS[1],1) end ``` There is a different way to fix this issue without using scripting, by using Redis lists instead of counters. The implementation is more complex and uses more advanced features but has the advantage of remembering the IP addresses of the clients currently performing an API call, which may or may not be useful depending on the application. 
``` FUNCTION LIMIT_API_CALL(ip) current = LLEN(ip) IF current > 10 THEN ERROR "too many requests per second" ELSE IF EXISTS(ip) == FALSE MULTI RPUSH(ip,ip) EXPIRE(ip,1) EXEC ELSE RPUSHX(ip,ip) END PERFORM_API_CALL() END ``` The [`RPUSHX`](../rpushx) command only pushes the element if the key already exists. Note that we have a race here, but it is not a problem: [`EXISTS`](../exists) may return false but the key may be created by another client before we create it inside the [`MULTI`](../multi) / [`EXEC`](../exec) block. However this race will just miss an API call under rare conditions, so the rate limiting will still work correctly. redis CF.DEL CF.DEL ====== ``` CF.DEL ``` Syntax ``` CF.DEL key item ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 1.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(k), where k is the number of sub-filters Deletes an item once from the filter. If the item exists only once, it will be removed from the filter. If the item was added multiple times, it will still be present. Warning Deleting elements that are not in the filter may delete a different item, resulting in false negatives. ### Parameters * **key**: The name of the filter * **item**: The item to delete from the filter ### Complexity O(n), where n is the number of `sub-filters`. Both alternative locations are checked on all `sub-filters`. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) - where "1" means the item has been deleted from the filter, and "0" means the item was not found. Examples -------- ``` redis> CF.DEL cf item1 (integer) 1 ``` ``` redis> CF.DEL cf item_new (integer) 0 ``` ``` redis> CF.DEL cf1 item_new (error) Not found ``` redis XCLAIM XCLAIM ====== ``` XCLAIM ``` Syntax ``` XCLAIM key group consumer min-idle-time id [id ...] 
[IDLE ms] [TIME unix-time-milliseconds] [RETRYCOUNT count] [FORCE] [JUSTID] [LASTID lastid] ``` Available since: 5.0.0 Time complexity: O(log N) with N being the number of messages in the PEL of the consumer group. ACL categories: `@write`, `@stream`, `@fast`, In the context of a stream consumer group, this command changes the ownership of a pending message, so that the new owner is the consumer specified as the command argument. Normally this is what happens: 1. There is a stream with an associated consumer group. 2. Some consumer A reads a message via [`XREADGROUP`](../xreadgroup) from a stream, in the context of that consumer group. 3. As a side effect a pending message entry is created in the Pending Entries List (PEL) of the consumer group: it means the message was delivered to a given consumer, but it was not yet acknowledged via [`XACK`](../xack). 4. Then suddenly that consumer fails forever. 5. Other consumers may inspect the list of pending messages that have been stale for quite some time, using the [`XPENDING`](../xpending) command. In order to continue processing such messages, they use `XCLAIM` to acquire the ownership of the message and continue. Consumers can also use the [`XAUTOCLAIM`](../xautoclaim) command to automatically scan and claim stale pending messages. This dynamic is clearly explained in the [Stream intro documentation](https://redis.io/topics/streams-intro). Note that the message is claimed only if its idle time is greater than the minimum idle time we specify when calling `XCLAIM`. Because as a side effect `XCLAIM` will also reset the idle time (since this is a new attempt at processing the message), two consumers trying to claim a message at the same time will never both succeed: only one will successfully claim the message. This avoids processing a given message multiple times in a trivial way (yet multiple processing is possible and unavoidable in the general case). 
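The mutual exclusion just described can be sketched with a toy model (plain Python, not a Redis client; the names `PendingEntry` and `xclaim_model` are illustrative): a claim succeeds only when the entry's idle time exceeds the minimum, and a successful claim resets the idle clock, so of two racing claims only the first wins.

```python
# Toy model of XCLAIM's idle-time check.
class PendingEntry:
    def __init__(self, owner, delivered_at):
        self.owner = owner
        self.delivered_at = delivered_at  # last-delivery time in ms

def xclaim_model(entry, new_owner, min_idle_ms, now_ms):
    idle = now_ms - entry.delivered_at
    if idle <= min_idle_ms:
        return False                 # not stale enough: claim rejected
    entry.owner = new_owner
    entry.delivered_at = now_ms      # side effect: idle time is reset
    return True

entry = PendingEntry("consumer-1", delivered_at=0)
first = xclaim_model(entry, "Alice", min_idle_ms=3_600_000, now_ms=4_000_000)
second = xclaim_model(entry, "Bob", min_idle_ms=3_600_000, now_ms=4_000_000)
# first succeeds; second fails because the first claim reset the idle time.
```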
Moreover, as a side effect, `XCLAIM` will increment the count of attempted deliveries of the message unless the `JUSTID` option has been specified (which only delivers the message ID, not the message itself). In this way messages that cannot be processed for some reason, for instance because the consumers crash attempting to process them, will start to have a larger counter and can be detected inside the system. `XCLAIM` will not claim a message in the following cases: 1. The message doesn't exist in the group PEL (i.e. it was never read by any consumer) 2. The message exists in the group PEL but not in the stream itself (i.e. the message was read but never acknowledged, and then was deleted from the stream, either by trimming or by [`XDEL`](../xdel)) In both cases the reply will not contain a corresponding entry to that message (i.e. the length of the reply array may be smaller than the number of IDs provided to `XCLAIM`). In the latter case, the message will also be deleted from the PEL in which it was found. This feature was introduced in Redis 7.0. Command options --------------- The command has multiple options, however most are mainly for internal use in order to transfer the effects of `XCLAIM` or other commands to the AOF file and to propagate the same effects to the replicas, and are unlikely to be useful to normal users: 1. `IDLE <ms>`: Set the idle time (last time it was delivered) of the message. If IDLE is not specified, an IDLE of 0 is assumed, that is, the time count is reset because the message has now a new owner trying to process it. 2. `TIME <ms-unix-time>`: This is the same as IDLE but instead of a relative amount of milliseconds, it sets the idle time to a specific Unix time (in milliseconds). This is useful in order to rewrite the AOF file generating `XCLAIM` commands. 3. `RETRYCOUNT <count>`: Set the retry counter to the specified value. This counter is incremented every time a message is delivered again. 
Normally `XCLAIM` does not alter this counter, which is just served to clients when the [`XPENDING`](../xpending) command is called: this way clients can detect anomalies, like messages that are never processed for some reason after a big number of delivery attempts. 4. `FORCE`: Creates the pending message entry in the PEL even if certain specified IDs are not already in the PEL assigned to a different client. However the message must exist in the stream, otherwise the IDs of non-existing messages are ignored. 5. `JUSTID`: Return just an array of IDs of messages successfully claimed, without returning the actual message. Using this option means the retry counter is not incremented. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays), specifically: The command returns all the messages successfully claimed, in the same format as [`XRANGE`](../xrange). However if the `JUSTID` option was specified, only the message IDs are reported, without including the actual message. Examples -------- ``` > XCLAIM mystream mygroup Alice 3600000 1526569498055-0 1) 1) 1526569498055-0 2) 1) "message" 2) "orange" ``` In the above example we claim the message with ID `1526569498055-0`, only if the message is idle for at least one hour without the original consumer or some other consumer making progress (acknowledging or claiming it), and assign the ownership to the consumer `Alice`. redis CLUSTER CLUSTER ======= ``` CLUSTER LINKS ``` Syntax ``` CLUSTER LINKS ``` Available since: 7.0.0 Time complexity: O(N) where N is the total number of Cluster nodes ACL categories: `@slow`, Each node in a Redis Cluster maintains a pair of long-lived TCP links with each peer in the cluster: one for sending outbound messages towards the peer and one for receiving inbound messages from the peer. `CLUSTER LINKS` outputs information of all such peer links as an array, where each array element is a map that contains attributes and their values for an individual link. 
Examples -------- The following is an example output: ``` > CLUSTER LINKS 1) 1) "direction" 2) "to" 3) "node" 4) "8149d745fa551e40764fecaf7cab9dbdf6b659ae" 5) "create-time" 6) (integer) 1639442739375 7) "events" 8) "rw" 9) "send-buffer-allocated" 10) (integer) 4512 11) "send-buffer-used" 12) (integer) 0 2) 1) "direction" 2) "from" 3) "node" 4) "8149d745fa551e40764fecaf7cab9dbdf6b659ae" 5) "create-time" 6) (integer) 1639442739411 7) "events" 8) "r" 9) "send-buffer-allocated" 10) (integer) 0 11) "send-buffer-used" 12) (integer) 0 ``` Each map is composed of the following attributes of the corresponding cluster link and their values: 1. `direction`: This link is established by the local node `to` the peer, or accepted by the local node `from` the peer. 2. `node`: The node id of the peer. 3. `create-time`: Creation time of the link. (In the case of a `to` link, this is the time when the TCP link is created by the local node, not the time when it is actually established.) 4. `events`: Events currently registered for the link. `r` means readable event, `w` means writable event. 5. `send-buffer-allocated`: Allocated size of the link's send buffer, which is used to buffer outgoing messages toward the peer. 6. `send-buffer-used`: Size of the portion of the link's send buffer that is currently holding data (messages). Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): An array of maps where each map contains various attributes and their values of a cluster link. redis LATENCY LATENCY ======= ``` LATENCY DOCTOR ``` Syntax ``` LATENCY DOCTOR ``` Available since: 2.8.13 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, The `LATENCY DOCTOR` command reports about different latency-related issues and advises about possible remedies. 
This command is the most powerful analysis tool in the latency monitoring framework, and is able to provide additional statistical data like the average period between latency spikes, the median deviation, and a human-readable analysis of the event. For certain events, like `fork`, additional information is provided, like the rate at which the system forks processes. This is the output you should post in the Redis mailing list if you are looking for help about Latency related issues. Examples -------- ``` 127.0.0.1:6379> latency doctor Dave, I have observed latency spikes in this Redis instance. You don't mind talking about it, do you Dave? 1. command: 5 latency spikes (average 300ms, mean deviation 120ms, period 73.40 sec). Worst all time event 500ms. I have a few advices for you: - Your current Slow Log configuration only logs events that are slower than your configured latency monitor threshold. Please use 'CONFIG SET slowlog-log-slower-than 1000'. - Check your Slow Log to understand what are the commands you are running which are too slow to execute. Please check http://redis.io/commands/slowlog for more information. - Deleting, expiring or evicting (because of maxmemory policy) large objects is a blocking operation. If you have very large objects that are often deleted, expired, or evicted, try to fragment those objects into multiple smaller objects. ``` **Note:** the doctor has erratic psychological behaviors, so we recommend interacting with it carefully. For more information refer to the [Latency Monitoring Framework page](https://redis.io/topics/latency-monitor). Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings) redis ACL ACL === ``` ACL SAVE ``` Syntax ``` ACL SAVE ``` Available since: 6.0.0 Time complexity: O(N). Where N is the number of configured users. 
ACL categories: `@admin`, `@slow`, `@dangerous`, When Redis is configured to use an ACL file (with the `aclfile` configuration option), this command will save the currently defined ACLs from the server memory to the ACL file. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` on success. The command may fail with an error for several reasons: if the file cannot be written or if the server is not configured to use an external ACL file. Examples -------- ``` > ACL SAVE +OK > ACL SAVE -ERR There was an error trying to save the ACLs. Please check the server logs for more information ``` redis LREM LREM ==== ``` LREM ``` Syntax ``` LREM key count element ``` Available since: 1.0.0 Time complexity: O(N+M) where N is the length of the list and M is the number of elements removed. ACL categories: `@write`, `@list`, `@slow`, Removes the first `count` occurrences of elements equal to `element` from the list stored at `key`. The `count` argument influences the operation in the following ways: * `count > 0`: Remove elements equal to `element` moving from head to tail. * `count < 0`: Remove elements equal to `element` moving from tail to head. * `count = 0`: Remove all elements equal to `element`. For example, `LREM list -2 "hello"` will remove the last two occurrences of `"hello"` in the list stored at `list`. Note that non-existing keys are treated like empty lists, so when `key` does not exist, the command will always return `0`. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of removed elements. Examples -------- ``` RPUSH mylist "hello" RPUSH mylist "hello" RPUSH mylist "foo" RPUSH mylist "hello" LREM mylist -2 "hello" LRANGE mylist 0 -1 ``` redis ZDIFF ZDIFF ===== ``` ZDIFF ``` Syntax ``` ZDIFF numkeys key [key ...] 
[WITHSCORES] ``` Available since: 6.2.0 Time complexity: O(L + (N-K)log(N)) worst case where L is the total number of elements in all the sets, N is the size of the first set, and K is the size of the result set. ACL categories: `@read`, `@sortedset`, `@slow`, This command is similar to [`ZDIFFSTORE`](../zdiffstore), but instead of storing the resulting sorted set, it is returned to the client. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): the result of the difference (optionally with their scores, in case the `WITHSCORES` option is given). Examples -------- ``` ZADD zset1 1 "one" ZADD zset1 2 "two" ZADD zset1 3 "three" ZADD zset2 1 "one" ZADD zset2 2 "two" ZDIFF 2 zset1 zset2 ZDIFF 2 zset1 zset2 WITHSCORES ```
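The difference semantics can be sketched with a toy model (Python dicts mapping member to score standing in for sorted sets; this illustrates the rule, it is not a Redis client): keep the members of the first set that appear in none of the others, ordered by score with ties broken lexicographically as in Redis.

```python
def zdiff_model(first: dict, *others: dict, withscores: bool = False):
    """Toy model of ZDIFF: members of `first` absent from all `others`,
    ordered by (score, member) as sorted sets are."""
    excluded = set().union(*map(dict.keys, others)) if others else set()
    kept = sorted((m for m in first if m not in excluded),
                  key=lambda m: (first[m], m))
    return [(m, first[m]) for m in kept] if withscores else kept

zset1 = {"one": 1, "two": 2, "three": 3}
zset2 = {"one": 1, "two": 2}
# zdiff_model(zset1, zset2)                  -> ["three"]
# zdiff_model(zset1, zset2, withscores=True) -> [("three", 3)]
```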
redis REPLICAOF REPLICAOF ========= ``` REPLICAOF ``` Syntax ``` REPLICAOF host port ``` Available since: 5.0.0 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, The `REPLICAOF` command can change the replication settings of a replica on the fly. If a Redis server is already acting as a replica, the command `REPLICAOF NO ONE` will turn off the replication, turning the Redis server into a MASTER. In the proper form `REPLICAOF hostname port` will make the server a replica of another server listening at the specified hostname and port. If a server is already a replica of some master, `REPLICAOF hostname port` will stop the replication against the old server and start the synchronization against the new one, discarding the old dataset. The form `REPLICAOF NO ONE` will stop replication, turning the server into a MASTER, but will not discard the dataset that was already replicated. So, if the old master stops working, it is possible to turn the replica into a master and set the application to use this new master in read/write. Later when the other Redis server is fixed, it can be reconfigured to work as a replica. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) Examples -------- ``` > REPLICAOF NO ONE "OK" > REPLICAOF 127.0.0.1 6799 "OK" ``` redis INCRBY INCRBY ====== ``` INCRBY ``` Syntax ``` INCRBY key increment ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@write`, `@string`, `@fast`, Increments the number stored at `key` by `increment`. If the key does not exist, it is set to `0` before performing the operation. An error is returned if the key contains a value of the wrong type or contains a string that can not be represented as integer. This operation is limited to 64 bit signed integers. See [`INCR`](../incr) for extra information on increment/decrement operations. 
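The 64-bit bound can be sketched as follows (an illustrative check in plain Python; Redis performs the equivalent check internally and replies with an error on overflow):

```python
# Toy model of the 64-bit signed bound that INCR/INCRBY enforce.
INT64_MIN, INT64_MAX = -2**63, 2**63 - 1

def incrby_model(value: int, increment: int) -> int:
    result = value + increment
    if result < INT64_MIN or result > INT64_MAX:
        # Redis rejects the operation instead of wrapping around.
        raise ValueError("increment or decrement would overflow")
    return result

incrby_model(10, 5)            # -> 15
# incrby_model(INT64_MAX, 1)   # raises: outside the 64-bit signed range
```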
Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the value of `key` after the increment Examples -------- ``` SET mykey "10" INCRBY mykey 5 ``` redis ZRANGEBYLEX ZRANGEBYLEX =========== ``` ZRANGEBYLEX (deprecated) ``` As of Redis version 6.2.0, this command is regarded as deprecated. It can be replaced by [`ZRANGE`](../zrange) with the `BYLEX` argument when migrating or writing new code. Syntax ``` ZRANGEBYLEX key min max [LIMIT offset count] ``` Available since: 2.8.9 Time complexity: O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with LIMIT), you can consider it O(log(N)). ACL categories: `@read`, `@sortedset`, `@slow`, When all the elements in a sorted set are inserted with the same score, in order to force lexicographical ordering, this command returns all the elements in the sorted set at `key` with a value between `min` and `max`. If the elements in the sorted set have different scores, the returned elements are unspecified. The elements are considered to be ordered from lower to higher strings as compared byte-by-byte using the `memcmp()` C function. Longer strings are considered greater than shorter strings if the common part is identical. The optional `LIMIT` argument can be used to only get a range of the matching elements (similar to *SELECT LIMIT offset, count* in SQL). A negative `count` returns all elements from the `offset`. Keep in mind that if `offset` is large, the sorted set needs to be traversed for `offset` elements before getting to the elements to return, which can add up to O(N) time complexity. How to specify intervals ------------------------ Valid *start* and *stop* must start with `(` or `[`, in order to specify if the range item is respectively exclusive or inclusive. 
The special values of `+` or `-` for *start* and *stop* have the special meaning of positively infinite and negatively infinite strings, so for instance the command **ZRANGEBYLEX myzset - +** is guaranteed to return all the elements in the sorted set, if all the elements have the same score. Details on strings comparison ----------------------------- Strings are compared as binary arrays of bytes. Because of how the ASCII character set is specified, this means that usually this also has the effect of comparing normal ASCII characters in an obvious dictionary way. However this is not true if non-plain-ASCII strings are used (for example UTF-8 strings). However the user can apply a transformation to the encoded string so that the first part of the element inserted in the sorted set will compare as the user requires for the specific application. For example if I want to add strings that will be compared in a case-insensitive way, but I still want to retrieve the real case when querying, I can add strings in the following way: ``` ZADD autocomplete 0 foo:Foo 0 bar:BAR 0 zap:zap ``` Because of the first *normalized* part in every element (before the colon character), we are forcing a given comparison, however after the range is queried using `ZRANGEBYLEX`, the application can display to the user the second part of the string, after the colon. The binary nature of the comparison makes it possible to use sorted sets as a general-purpose index, for example the first part of the element can be a 64-bit big-endian number: since big-endian numbers have the most significant bytes in the initial positions, the binary comparison will match the numerical comparison of the numbers. This can be used in order to implement range queries on 64-bit values. As in the example below, after the first 8 bytes we can store the value of the element we are actually indexing. 
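The big-endian trick can be verified with a short sketch (Python's `struct` standing in for the application-side encoding; illustrative only): fixed-width big-endian encodings of integers sort byte-wise in the same order as the numbers they encode.

```python
import struct

# Pack 64-bit integers big-endian so that memcmp-style byte comparison
# agrees with numeric comparison; the payload after the 8-byte prefix is
# the value actually being indexed.
def index_entry(number: int, payload: bytes) -> bytes:
    return struct.pack(">Q", number) + payload  # 8-byte big-endian prefix

entries = [index_entry(300, b"c"), index_entry(2, b"a"), index_entry(41, b"b")]
entries.sort()                 # plain byte-wise comparison
numbers = [struct.unpack(">Q", e[:8])[0] for e in entries]
# numbers == [2, 41, 300]: byte order matches numeric order
```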
Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list of elements in the specified score range. Examples -------- ``` ZADD myzset 0 a 0 b 0 c 0 d 0 e 0 f 0 g ZRANGEBYLEX myzset - [c ZRANGEBYLEX myzset - (c ZRANGEBYLEX myzset [aaa (g ``` redis BF.ADD BF.ADD ====== ``` BF.ADD ``` Syntax ``` BF.ADD key item ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 1.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(k), where k is the number of hash functions used by the last sub-filter Creates an empty Bloom Filter with a single sub-filter for the initial capacity requested and with an upper bound `error_rate`. By default, the filter auto-scales by creating additional sub-filters when `capacity` is reached. The new sub-filter is created with size of the previous sub-filter multiplied by `expansion`. Though the filter can scale up by creating sub-filters, it is recommended to reserve the estimated required `capacity`, since maintaining and querying sub-filters requires additional memory (each sub-filter uses extra bits and an additional hash function) and consumes more CPU time than an equivalent filter that had the right capacity at creation time. The optimal number of hash functions is `ceil(-ln(error_rate) / ln(2))`. The number of bits per item is the number of hash functions divided by `ln(2)` (roughly 1.44 bits per hash function). * **1%** error rate requires 7 hash functions and 10.08 bits per item. * **0.1%** error rate requires 10 hash functions and 14.4 bits per item. * **0.01%** error rate requires 14 hash functions and 20.16 bits per item. ### Parameters: * **key**: The key under which the filter is found * **error\_rate**: The desired probability for false positives. The rate is a decimal value between 0 and 1. For example, for a desired false positive rate of 0.1% (1 in 1000), error\_rate should be set to 0.001. * **capacity**: The number of entries intended to be added to the filter. 
If your filter allows scaling, performance will begin to degrade after adding more items than this number. The actual degradation depends on how far the limit has been exceeded. Performance degrades linearly with the number of `sub-filters`.

Optional parameters:

* **NONSCALING**: Prevents the filter from creating additional sub-filters if initial capacity is reached. Non-scaling filters require slightly less memory than their scaling counterparts. The filter returns an error when `capacity` is reached.
* **EXPANSION**: When `capacity` is reached, an additional sub-filter is created. The size of the new sub-filter is the size of the last sub-filter multiplied by `expansion`. If the number of elements to be stored in the filter is unknown, we recommend that you use an `expansion` of 2 or more to reduce the number of sub-filters. Otherwise, we recommend that you use an `expansion` of 1 to reduce memory consumption. The default expansion value is 2.

Return
------

[Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) - "1" if the item did not previously exist in the filter, "0" if the item was likely already added to the filter (false positives are possible).

Examples
--------

```
redis> BF.ADD bf item1
(integer) 0
redis> BF.ADD bf item_new
(integer) 1
```

redis FT.DROPINDEX

FT.DROPINDEX
============

```
FT.DROPINDEX
```

Syntax

```
FT.DROPINDEX index [DD]
```

Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 2.0.0](https://redis.io/docs/stack/search) Time complexity: O(1) or O(N) if documents are deleted, where N is the number of keys in the keyspace

Delete an index [Examples](#examples)

Required arguments
------------------

`index` is the full-text index name. You must first create the index using [`FT.CREATE`](../ft.create).

Optional arguments
------------------

`DD` is an optional flag that, if set, deletes the actual document hashes. By default, FT.DROPINDEX does not delete the documents associated with the index.
Adding the `DD` option deletes the documents as well. If an index creation is still running ([`FT.CREATE`](../ft.create) is running asynchronously), only the document hashes that have already been indexed are deleted. The document hashes left to be indexed remain in the database. To check the completion of the indexing, use [`FT.INFO`](../ft.info). Return ------ FT.DROPINDEX returns a simple string reply `OK` if executed correctly, or an error reply otherwise. Examples -------- **Delete an index** ``` 127.0.0.1:6379> FT.DROPINDEX idx DD OK ``` See also -------- [`FT.CREATE`](../ft.create) | [`FT.INFO`](../ft.info) Related topics -------------- [RediSearch](https://redis.io/docs/stack/search) redis FUNCTION FUNCTION ======== ``` FUNCTION KILL ``` Syntax ``` FUNCTION KILL ``` Available since: 7.0.0 Time complexity: O(1) ACL categories: `@slow`, `@scripting`, Kill a function that is currently executing. The `FUNCTION KILL` command can be used only on functions that did not modify the dataset during their execution (since stopping a read-only function does not violate the scripting engine's guaranteed atomicity). For more information please refer to [Introduction to Redis Functions](https://redis.io/topics/functions-intro). Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) redis WAIT WAIT ==== ``` WAIT ``` Syntax ``` WAIT numreplicas timeout ``` Available since: 3.0.0 Time complexity: O(1) ACL categories: `@slow`, `@connection`, This command blocks the current client until all the previous write commands are successfully transferred and acknowledged by at least the specified number of replicas. If the timeout, specified in milliseconds, is reached, the command returns even if the specified number of replicas were not yet reached. 
The command **will always return** the number of replicas that acknowledged the write commands sent by the current client before the `WAIT` command, both in the case where the specified number of replicas is reached and in the case where the timeout is reached.

A few remarks:

1. When `WAIT` returns, all the previous write commands sent in the context of the current connection are guaranteed to be received by the number of replicas returned by `WAIT`.
2. If the command is sent as part of a [`MULTI`](../multi) transaction (or, since Redis 7.0, any context that does not allow blocking, such as inside scripts), the command does not block but instead returns as soon as possible the number of replicas that acknowledged the previous write commands.
3. A timeout of 0 means to block forever.
4. Since `WAIT` returns the number of replicas reached both in case of failure and success, the client should check that the returned value is equal to or greater than the replication level it demanded.

Consistency and WAIT
--------------------

Note that `WAIT` does not make Redis a strongly consistent store: while synchronous replication is part of a replicated state machine, it is not the only thing needed. However in the context of Sentinel or Redis Cluster failover, `WAIT` improves real-world data safety.

Specifically, if a given write is transferred to one or more replicas, it is more likely (but not guaranteed) that if the master fails, we'll be able to promote, during a failover, a replica that received the write: both Sentinel and Redis Cluster will do a best-effort attempt to promote the best replica among the set of available replicas. However this is just a best-effort attempt, so it is possible to still lose a write synchronously replicated to multiple replicas.

Implementation details
----------------------

Since the introduction of partial resynchronization with replicas (the PSYNC feature), Redis replicas asynchronously ping their master with the offset they have already processed in the replication stream.
This is used in multiple ways:

1. Detect timed-out replicas.
2. Perform a partial resynchronization after a disconnection.
3. Implement `WAIT`.

In the specific case of the implementation of `WAIT`, Redis remembers, for each client, the replication offset of the produced replication stream when a given write command was executed in the context of a given client. When `WAIT` is called, Redis checks if the specified number of replicas already acknowledged this offset or a greater one.

Return
------

[Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): The command returns the number of replicas reached by all the writes performed in the context of the current connection.

Examples
--------

```
> SET foo bar
OK
> WAIT 1 0
(integer) 1
> WAIT 2 1000
(integer) 1
```

In the example above, the first call to `WAIT` does not use a timeout and asks for the write to reach 1 replica. It returns with success. In the second attempt we instead add a timeout, and ask for the replication of the write to reach two replicas. Since there is a single replica available, after one second `WAIT` unblocks and returns 1, the number of replicas reached.

redis CLUSTER

CLUSTER
=======

```
CLUSTER COUNTKEYSINSLOT
```

Syntax

```
CLUSTER COUNTKEYSINSLOT slot
```

Available since: 3.0.0 Time complexity: O(1) ACL categories: `@slow`,

Returns the number of keys in the specified Redis Cluster hash slot. The command only queries the local data set, so contacting a node that is not serving the specified hash slot will always result in a count of zero being returned.

```
> CLUSTER COUNTKEYSINSLOT 7000
(integer) 50341
```

Return
------

[Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): The number of keys in the specified hash slot, or an error if the hash slot is invalid.
redis CMS.INITBYDIM

CMS.INITBYDIM
=============

```
CMS.INITBYDIM
```

Syntax

```
CMS.INITBYDIM key width depth
```

Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(1)

Initializes a Count-Min Sketch to the dimensions specified by the user.

### Parameters:

* **key**: The name of the sketch.
* **width**: Number of counters in each array. Reduces the error size.
* **depth**: Number of counter-arrays. Reduces the probability for an error of a certain size (percentage of total count).

Return
------

[Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) - `OK` if executed correctly, or [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors) otherwise.

Examples
--------

```
redis> CMS.INITBYDIM test 2000 5
OK
```

redis EVAL_RO

EVAL\_RO
========

```
EVAL_RO
```

Syntax

```
EVAL_RO script numkeys [key [key ...]] [arg [arg ...]]
```

Available since: 7.0.0 Time complexity: Depends on the script that is executed. ACL categories: `@slow`, `@scripting`,

This is a read-only variant of the [`EVAL`](../eval) command that cannot execute commands that modify data.

For more information about when to use this command vs [`EVAL`](../eval), please refer to [Read-only scripts](https://redis.io/docs/manual/programmability/#read-only_scripts).

For more information about [`EVAL`](../eval) scripts please refer to [Introduction to Eval Scripts](https://redis.io/topics/eval-intro).

Examples
--------

```
> SET mykey "Hello"
OK
> EVAL_RO "return redis.call('GET', KEYS[1])" 1 mykey
"Hello"
> EVAL_RO "return redis.call('DEL', KEYS[1])" 1 mykey
(error) ERR Error running script (call to b0d697da25b13e49157b2c214a4033546aba2104): @user_script:1: @user_script: 1: Write commands are not allowed from read-only scripts.
```

redis BGREWRITEAOF

BGREWRITEAOF
============

```
BGREWRITEAOF
```

Syntax

```
BGREWRITEAOF
```

Available since: 1.0.0 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`,

Instruct Redis to start an [Append Only File](https://redis.io/topics/persistence#append-only-file) rewrite process. The rewrite will create a small optimized version of the current Append Only File.

If `BGREWRITEAOF` fails, no data gets lost as the old AOF will be untouched.

The rewrite will only be triggered by Redis if there is not already a background process doing persistence. Specifically:

* If a Redis child is creating a snapshot on disk, the AOF rewrite is *scheduled* but not started until the saving child producing the RDB file terminates. In this case the `BGREWRITEAOF` will still return a positive status reply, but with an appropriate message. You can check if an AOF rewrite is scheduled by looking at the [`INFO`](../info) command as of Redis 2.6 or successive versions.
* If an AOF rewrite is already in progress the command returns an error and no AOF rewrite will be scheduled for a later time.
* If the AOF rewrite could start, but the attempt at starting it fails (for instance because of an error in creating the child process), an error is returned to the caller.

Since Redis 2.4 the AOF rewrite is automatically triggered by Redis, however the `BGREWRITEAOF` command can be used to trigger a rewrite at any time.

Please refer to the [persistence documentation](https://redis.io/topics/persistence) for detailed information.

Return
------

[Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): A simple string reply indicating that the rewriting started or is about to start ASAP, when the call is executed with success.

The command may reply with an error in certain cases, as documented above.

redis CLUSTER

CLUSTER
=======

```
CLUSTER DELSLOTS
```

Syntax

```
CLUSTER DELSLOTS slot [slot ...]
``` Available since: 3.0.0 Time complexity: O(N) where N is the total number of hash slot arguments ACL categories: `@admin`, `@slow`, `@dangerous`, In Redis Cluster, each node keeps track of which master is serving a particular hash slot. The `CLUSTER DELSLOTS` command asks a particular Redis Cluster node to forget which master is serving the hash slots specified as arguments. In the context of a node that has received a `CLUSTER DELSLOTS` command and has consequently removed the associations for the passed hash slots, we say those hash slots are *unbound*. Note that the existence of unbound hash slots occurs naturally when a node has not been configured to handle them (something that can be done with the [`CLUSTER ADDSLOTS`](../cluster-addslots) command) and if it has not received any information about who owns those hash slots (something that it can learn from heartbeat or update messages). If a node with unbound hash slots receives a heartbeat packet from another node that claims to be the owner of some of those hash slots, the association is established instantly. Moreover, if a heartbeat or update message is received with a configuration epoch greater than the node's own, the association is re-established. However, note that: 1. The command only works if all the specified slots are already associated with some node. 2. The command fails if the same slot is specified multiple times. 3. As a side effect of the command execution, the node may go into *down* state because not all hash slots are covered. Example ------- The following command removes the association for slots 5000 and 5001 from the node receiving the command: ``` > CLUSTER DELSLOTS 5000 5001 OK ``` Usage in Redis Cluster ---------------------- This command only works in cluster mode and may be useful for debugging and in order to manually orchestrate a cluster configuration when a new cluster is created. It is currently not used by `redis-cli`, and mainly exists for API completeness. 
Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` if the command was successful. Otherwise an error is returned.
redis PING

PING
====

```
PING
```

Syntax

```
PING [message]
```

Available since: 1.0.0 Time complexity: O(1) ACL categories: `@fast`, `@connection`,

Returns `PONG` if no argument is provided, otherwise returns a copy of the argument as a bulk. This command is useful for:

1. Testing whether a connection is still alive.
2. Verifying the server's ability to serve data - an error is returned when this isn't the case (e.g., during load from persistence or accessing a stale replica).
3. Measuring latency.

If the client is subscribed to a channel or a pattern, it will instead return a multi-bulk with a "pong" in the first position and an empty bulk in the second position, unless an argument is provided in which case it returns a copy of the argument.

Return
------

[Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings), and specifically `PONG`, when no argument is provided.

[Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings) the argument provided, when applicable.

Examples
--------

```
PING
PING "hello world"
```

redis SMISMEMBER

SMISMEMBER
==========

```
SMISMEMBER
```

Syntax

```
SMISMEMBER key member [member ...]
```

Available since: 6.2.0 Time complexity: O(N) where N is the number of elements being checked for membership ACL categories: `@read`, `@set`, `@fast`,

Returns whether each `member` is a member of the set stored at `key`. For every `member`, `1` is returned if the value is a member of the set, or `0` if the element is not a member of the set or if `key` does not exist.

Return
------

[Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list representing the membership of the given elements, in the same order as they are requested.
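The ordered 0/1 reply described above can be sketched with a plain Python set (the data and the helper name are illustrative, not the server implementation):

```python
# Hypothetical data: a set containing a single member, as if built by SADD.
myset = {"one"}

def smismember(s, *members):
    """Return one 0/1 flag per requested member, in request order,
    mirroring SMISMEMBER's array reply (missing key => empty set)."""
    return [1 if m in s else 0 for m in members]

assert smismember(myset, "one", "notamember") == [1, 0]
```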
Examples
--------

```
SADD myset "one"
SADD myset "one"
SMISMEMBER myset "one" "notamember"
```

redis ROLE

ROLE
====

```
ROLE
```

Syntax

```
ROLE
```

Available since: 2.8.12 Time complexity: O(1) ACL categories: `@admin`, `@fast`, `@dangerous`,

Provides information on the role of a Redis instance in the context of replication, by reporting whether the instance is currently a `master`, `slave`, or `sentinel`. The command also returns additional information about the state of the replication (if the role is master or slave) or the list of monitored master names (if the role is sentinel).

Output format
-------------

The command returns an array of elements. The first element is the role of the instance, as one of the following three strings:

* "master"
* "slave"
* "sentinel"

The additional elements of the array depend on the role.

Master output
-------------

An example of output when `ROLE` is called in a master instance:

```
1) "master"
2) (integer) 3129659
3) 1) 1) "127.0.0.1"
      2) "9001"
      3) "3129242"
   2) 1) "127.0.0.1"
      2) "9002"
      3) "3129543"
```

The master output is composed of the following parts:

1. The string `master`.
2. The current master replication offset, which is an offset that masters and replicas share to understand, in partial resynchronizations, the part of the replication stream the replica needs to fetch to continue.
3. An array of three-element arrays representing the connected replicas. Every sub-array contains the replica IP, port, and the last acknowledged replication offset.

Output of the command on replicas
---------------------------------

An example of output when `ROLE` is called in a replica instance:

```
1) "slave"
2) "127.0.0.1"
3) (integer) 9000
4) "connected"
5) (integer) 3167038
```

The replica output is composed of the following parts:

1. The string `slave`, because of backward compatibility (see note at the end of this page).
2. The IP of the master.
3. The port number of the master.
4.
The state of the replication from the point of view of the master, which can be `connect` (the instance needs to connect to its master), `connecting` (the master-replica connection is in progress), `sync` (the master and replica are trying to perform the synchronization), `connected` (the replica is online).
5. The amount of replication stream data the replica has processed so far, in terms of master replication offset.

Sentinel output
---------------

An example of Sentinel output:

```
1) "sentinel"
2) 1) "resque-master"
   2) "html-fragments-master"
   3) "stats-master"
   4) "metadata-master"
```

The sentinel output is composed of the following parts:

1. The string `sentinel`.
2. An array of master names monitored by this Sentinel instance.

Return
------

[Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): where the first element is one of `master`, `slave`, `sentinel` and the additional elements are role-specific as illustrated above.

Examples
--------

```
ROLE
```

**A note about the word slave used in this man page**: Starting with Redis 5, the Redis project no longer uses the word slave, except for backward compatibility. Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API is naturally deprecated.

redis SUNION

SUNION
======

```
SUNION
```

Syntax

```
SUNION key [key ...]
```

Available since: 1.0.0 Time complexity: O(N) where N is the total number of elements in all given sets. ACL categories: `@read`, `@set`, `@slow`,

Returns the members of the set resulting from the union of all the given sets. For example:

```
key1 = {a,b,c,d}
key2 = {c}
key3 = {a,c,e}
SUNION key1 key2 key3 = {a,b,c,d,e}
```

Keys that do not exist are considered to be empty sets.

Return
------

[Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list with members of the resulting set.
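The union above, including the treatment of missing keys as empty sets, mirrors plain set union; a Python sketch of the same example (key names as in the page, not live Redis keys):

```python
key1 = {"a", "b", "c", "d"}
key2 = {"c"}
key3 = {"a", "c", "e"}

# A key that does not exist behaves as an empty set, so it never
# removes anything from (or adds anything to) the union.
missing = set()

assert key1 | key2 | key3 | missing == {"a", "b", "c", "d", "e"}
```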
Examples
--------

```
SADD key1 "a"
SADD key1 "b"
SADD key1 "c"
SADD key2 "c"
SADD key2 "d"
SADD key2 "e"
SUNION key1 key2
```

redis ZREMRANGEBYLEX

ZREMRANGEBYLEX
==============

```
ZREMRANGEBYLEX
```

Syntax

```
ZREMRANGEBYLEX key min max
```

Available since: 2.8.9 Time complexity: O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements removed by the operation. ACL categories: `@write`, `@sortedset`, `@slow`,

When all the elements in a sorted set are inserted with the same score, in order to force lexicographical ordering, this command removes all elements in the sorted set stored at `key` between the lexicographical range specified by `min` and `max`.

The meaning of `min` and `max` is the same as for the [`ZRANGEBYLEX`](../zrangebylex) command. Similarly, this command actually removes the same elements that [`ZRANGEBYLEX`](../zrangebylex) would return if called with the same `min` and `max` arguments.

Return
------

[Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of elements removed.

Examples
--------

```
ZADD myzset 0 aaaa 0 b 0 c 0 d 0 e
ZADD myzset 0 foo 0 zap 0 zip 0 ALPHA 0 alpha
ZRANGE myzset 0 -1
ZREMRANGEBYLEX myzset [alpha [omega
ZRANGE myzset 0 -1
```

redis LLEN

LLEN
====

```
LLEN
```

Syntax

```
LLEN key
```

Available since: 1.0.0 Time complexity: O(1) ACL categories: `@read`, `@list`, `@fast`,

Returns the length of the list stored at `key`. If `key` does not exist, it is interpreted as an empty list and `0` is returned. An error is returned when the value stored at `key` is not a list.

Return
------

[Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the length of the list at `key`.
Examples
--------

```
LPUSH mylist "World"
LPUSH mylist "Hello"
LLEN mylist
```

redis MONITOR

MONITOR
=======

```
MONITOR
```

Syntax

```
MONITOR
```

Available since: 1.0.0 Time complexity: ACL categories: `@admin`, `@slow`, `@dangerous`,

`MONITOR` is a debugging command that streams back every command processed by the Redis server. It can help in understanding what is happening to the database. This command can be used both via `redis-cli` and via `telnet`.

The ability to see all the requests processed by the server is useful in order to spot bugs in an application, both when using Redis as a database and as a distributed caching system.

```
$ redis-cli monitor
1339518083.107412 [0 127.0.0.1:60866] "keys" "*"
1339518087.877697 [0 127.0.0.1:60866] "dbsize"
1339518090.420270 [0 127.0.0.1:60866] "set" "x" "6"
1339518096.506257 [0 127.0.0.1:60866] "get" "x"
1339518099.363765 [0 127.0.0.1:60866] "eval" "return redis.call('set','x','7')" "0"
1339518100.363799 [0 lua] "set" "x" "7"
1339518100.544926 [0 127.0.0.1:60866] "del" "x"
```

Use `SIGINT` (Ctrl-C) to stop a `MONITOR` stream running via `redis-cli`.

```
$ telnet localhost 6379
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
MONITOR
+OK
+1339518083.107412 [0 127.0.0.1:60866] "keys" "*"
+1339518087.877697 [0 127.0.0.1:60866] "dbsize"
+1339518090.420270 [0 127.0.0.1:60866] "set" "x" "6"
+1339518096.506257 [0 127.0.0.1:60866] "get" "x"
+1339518099.363765 [0 127.0.0.1:60866] "del" "x"
+1339518100.544926 [0 127.0.0.1:60866] "get" "x"
QUIT
+OK
Connection closed by foreign host.
```

Manually issue the [`QUIT`](../quit) or [`RESET`](../reset) commands to stop a `MONITOR` stream running via `telnet`.

Commands not logged by MONITOR
------------------------------

Because of security concerns, no administrative commands are logged in `MONITOR`'s output, and sensitive data is redacted in the command [`AUTH`](../auth). Furthermore, the command [`QUIT`](../quit) is also not logged.
Cost of running MONITOR
-----------------------

Because `MONITOR` streams back **all** commands, its use comes at a cost. The following (totally unscientific) benchmark numbers illustrate what the cost of running `MONITOR` can be.

Benchmark result **without** `MONITOR` running:

```
$ src/redis-benchmark -c 10 -n 100000 -q
PING_INLINE: 101936.80 requests per second
PING_BULK: 102880.66 requests per second
SET: 95419.85 requests per second
GET: 104275.29 requests per second
INCR: 93283.58 requests per second
```

Benchmark result **with** `MONITOR` running (`redis-cli monitor > /dev/null`):

```
$ src/redis-benchmark -c 10 -n 100000 -q
PING_INLINE: 58479.53 requests per second
PING_BULK: 59136.61 requests per second
SET: 41823.50 requests per second
GET: 45330.91 requests per second
INCR: 41771.09 requests per second
```

In this particular case, running a single `MONITOR` client can reduce the throughput by more than 50%. Running more `MONITOR` clients will reduce throughput even more.

Return
------

**Non standard return value**, just dumps the received commands in an infinite flow.

Behavior change history
-----------------------

* `>= 6.0.0`: [`AUTH`](../auth) excluded from the command's output.
* `>= 6.2.0`: [`RESET`](../reset) can be called to exit monitor mode.
* `>= 6.2.4`: [`AUTH`](../auth), [`HELLO`](../hello), [`EVAL`](../eval), [`EVAL_RO`](../eval_ro), [`EVALSHA`](../evalsha) and [`EVALSHA_RO`](../evalsha_ro) included in the command's output.

redis JSON.DEBUG

JSON.DEBUG
==========

```
JSON.DEBUG
```

Syntax

```
JSON.DEBUG
```

Available in: [Redis Stack](https://redis.io/docs/stack) / [JSON 1.0.0](https://redis.io/docs/stack/json) Time complexity: N/A

This is a container command for debugging related tasks.

redis FT._LIST

FT.\_LIST
=========

```
FT._LIST
```

Syntax

```
FT._LIST
```

Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 2.0.0](https://redis.io/docs/stack/search) Time complexity: O(1)

Returns a list of all existing indexes.
Temporary command

The prefix `_` in the command indicates that this is a temporary command. In the future, a [`SCAN`](../scan) type of command will be added for use when a database contains a large number of indices.

Return
------

[Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) with index names.

Examples
--------

```
FT._LIST
1) "idx"
2) "movies"
3) "imdb"
```

redis JSON.GET

JSON.GET
========

```
JSON.GET
```

Syntax

```
JSON.GET key [INDENT indent] [NEWLINE newline] [SPACE space] [path [path ...]]
```

Available in: [Redis Stack](https://redis.io/docs/stack) / [JSON 1.0.0](https://redis.io/docs/stack/json) Time complexity: O(N) when path is evaluated to a single value where N is the size of the value, O(N) when path is evaluated to multiple values, where N is the size of the key

Return the value at `path` in JSON serialized form [Examples](#examples)

Required arguments
------------------

`key` is the key to parse.

Optional arguments
------------------

`path` is the JSONPath to specify. Default is root `$`. JSON.GET accepts multiple `path` arguments.

Note

When using a single JSONPath, the root of the matching values is a JSON string with a top-level **array** of serialized JSON values. In contrast, a legacy path returns a single value.

When using multiple JSONPath arguments, the root of the matching values is a JSON string with a top-level **object**, with each object value being a top-level array of serialized JSON values. In contrast, if all paths are legacy paths, each object value is a single serialized JSON value. If the paths include both legacy paths and JSONPath, the returned value conforms to the JSONPath version (an array of values).

`INDENT` sets the indentation string for nested levels. `NEWLINE` sets the string that's printed at the end of each line. `SPACE` sets the string that's put between a key and a value.
Note Produce pretty-formatted JSON with `redis-cli` by following this example: ``` ~/$ redis-cli --raw 127.0.0.1:6379> JSON.GET myjsonkey INDENT "\t" NEWLINE "\n" SPACE " " path.to.value[1] ``` Return ------ JSON.GET returns a bulk string representing a JSON array of string replies. Each string is the JSON serialization of each JSON value that matches a path. Using multiple paths, JSON.GET returns a bulk string representing a JSON object with string values. Each string value is an array of the JSON serialization of each JSON value that matches a path. For more information about replies, see [Redis serialization protocol specification](https://redis.io/docs/reference/protocol-spec). Examples -------- **Return the value at `path` in JSON serialized form** Create a JSON document. ``` 127.0.0.1:6379> JSON.SET doc $ '{"a":2, "b": 3, "nested": {"a": 4, "b": null}}' OK ``` With a single JSONPath (JSON array bulk string): ``` 127.0.0.1:6379> JSON.GET doc $..b "[3,null]" ``` Using multiple paths with at least one JSONPath returns a JSON string with a top-level object with an array of JSON values per path: ``` 127.0.0.1:6379> JSON.GET doc ..a $..b "{\"$..b\":[3,null],\"..a\":[2,4]}" ``` See also -------- [`JSON.SET`](../json.set) | [`JSON.MGET`](../json.mget) Related topics -------------- * [RedisJSON](https://redis.io/docs/stack/json) * [Index and search JSON documents](https://redis.io/docs/stack/search/indexing_json) redis LATENCY LATENCY ======= ``` LATENCY RESET ``` Syntax ``` LATENCY RESET [event [event ...]] ``` Available since: 2.8.13 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, The `LATENCY RESET` command resets the latency spikes time series of all, or only some, events. When the command is called without arguments, it resets all the events, discarding the currently logged latency spike events, and resetting the maximum event time register. It is possible to reset only specific events by providing the `event` names as arguments. 
Valid values for `event` are: * `active-defrag-cycle` * `aof-fsync-always` * `aof-stat` * `aof-rewrite-diff-write` * `aof-rename` * `aof-write` * `aof-write-active-child` * `aof-write-alone` * `aof-write-pending-fsync` * `command` * `expire-cycle` * `eviction-cycle` * `eviction-del` * `fast-command` * `fork` * `rdb-unlink-temp-file` For more information refer to the [Latency Monitoring Framework page](https://redis.io/topics/latency-monitor). Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of event time series that were reset. redis GETDEL GETDEL ====== ``` GETDEL ``` Syntax ``` GETDEL key ``` Available since: 6.2.0 Time complexity: O(1) ACL categories: `@write`, `@string`, `@fast`, Get the value of `key` and delete the key. This command is similar to [`GET`](../get), except for the fact that it also deletes the key on success (if and only if the key's value type is a string). Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): the value of `key`, `nil` when `key` does not exist, or an error if the key's value type isn't a string. Examples -------- ``` SET mykey "Hello" GETDEL mykey GET mykey ``` redis ZREVRANK ZREVRANK ======== ``` ZREVRANK ``` Syntax ``` ZREVRANK key member [WITHSCORE] ``` Available since: 2.0.0 Time complexity: O(log(N)) ACL categories: `@read`, `@sortedset`, `@fast`, Returns the rank of `member` in the sorted set stored at `key`, with the scores ordered from high to low. The rank (or index) is 0-based, which means that the member with the highest score has rank `0`. The optional `WITHSCORE` argument supplements the command's reply with the score of the element returned. Use [`ZRANK`](../zrank) to get the rank of an element with the scores ordered from low to high. 
Return ------ * If `member` exists in the sorted set: + using `WITHSCORE`, [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): an array containing the rank and score of `member`. + without using `WITHSCORE`, [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the rank of `member`. * If `member` does not exist in the sorted set or `key` does not exist: + using `WITHSCORE`, [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): `nil`. + without using `WITHSCORE`, [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): `nil`. Note that in RESP3 null and nullarray are the same, but in RESP2 they are not. Examples -------- ``` ZADD myzset 1 "one" ZADD myzset 2 "two" ZADD myzset 3 "three" ZREVRANK myzset "one" ZREVRANK myzset "four" ZREVRANK myzset "three" WITHSCORE ZREVRANK myzset "four" WITHSCORE ``` History ------- * Starting with Redis version 7.2.0: Added the optional `WITHSCORE` argument. redis DUMP DUMP ==== ``` DUMP ``` Syntax ``` DUMP key ``` Available since: 2.6.0 Time complexity: O(1) to access the key and additional O(N\*M) to serialize it, where N is the number of Redis objects composing the value and M their average size. For small string values the time complexity is thus O(1)+O(1\*M) where M is small, so simply O(1). ACL categories: `@keyspace`, `@read`, `@slow`, Serialize the value stored at key in a Redis-specific format and return it to the user. The returned value can be synthesized back into a Redis key using the [`RESTORE`](../restore) command. The serialization format is opaque and non-standard, however it has a few semantic characteristics: * It contains a 64-bit checksum that is used to make sure errors will be detected. The [`RESTORE`](../restore) command makes sure to check the checksum before synthesizing a key using the serialized value. * Values are encoded in the same format used by RDB. 
* An RDB version is encoded inside the serialized value, so that different Redis versions with incompatible RDB formats will refuse to process the serialized value. The serialized value does NOT contain expire information. In order to capture the time to live of the current value the [`PTTL`](../pttl) command should be used. If `key` does not exist a nil bulk reply is returned. Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): the serialized value. Examples -------- ``` > SET mykey 10 OK > DUMP mykey "\x00\xc0\n\n\x00n\x9fWE\x0e\xaec\xbb" ```
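The serialized value can be synthesized back into a key with [`RESTORE`](../restore), as the description above notes. A minimal sketch continuing the example (the exact byte string is version-dependent, and `mynewkey` is an illustrative key name):

```
> DUMP mykey
"\x00\xc0\n\n\x00n\x9fWE\x0e\xaec\xbb"
> RESTORE mynewkey 0 "\x00\xc0\n\n\x00n\x9fWE\x0e\xaec\xbb"
OK
> GET mynewkey
"10"
```

Here `0` is the `ttl` argument of `RESTORE`, meaning the restored key has no expiration; since `DUMP` does not capture expire information, a TTL must be supplied explicitly if one is wanted.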
redis CLIENT CLIENT ====== ``` CLIENT CACHING ``` Syntax ``` CLIENT CACHING <YES | NO> ``` Available since: 6.0.0 Time complexity: O(1) ACL categories: `@slow`, `@connection`, This command controls the tracking of the keys in the next command executed by the connection, when tracking is enabled in `OPTIN` or `OPTOUT` mode. Please check the [client side caching documentation](https://redis.io/topics/client-side-caching) for background information. When tracking is enabled in Redis using the [`CLIENT TRACKING`](../client-tracking) command, it is possible to specify the `OPTIN` or `OPTOUT` options, so that keys in read-only commands are not automatically remembered by the server to be invalidated later. When we are in `OPTIN` mode, we can enable the tracking of the keys in the next command by calling `CLIENT CACHING yes` immediately before it. Similarly, when we are in `OPTOUT` mode, and keys are normally tracked, we can prevent the keys in the next command from being tracked by calling `CLIENT CACHING no`. In short, the command sets a state in the connection, valid only for the next command execution, that modifies the behavior of client tracking. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` or an error if the argument is not yes or no. redis TOPK.INCRBY TOPK.INCRBY =========== ``` TOPK.INCRBY ``` Syntax ``` TOPK.INCRBY key item increment [item increment ...] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(n \* k \* incr) where n is the number of items, k is the depth and incr is the increment Increases the score of an item in the data structure by the given increment. The scores of multiple items can be increased at once. If an item enters the Top-K list, the item which is expelled is returned. ### Parameters * **key**: Name of sketch where item is added. * **item**: Item/s to be added. * **increment**: increment to current item score.
Increment must be greater than or equal to 1. Increment is limited to 100,000 to avoid server freeze. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) of [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) - the item dropped from the Top-K list if one was expelled, [Null reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings) otherwise. Example ------- ``` redis> TOPK.INCRBY topk foo 3 bar 2 42 30 1) (nil) 2) (nil) 3) foo ``` redis BF.CARD BF.CARD ======= ``` BF.CARD ``` Syntax ``` BF.CARD key ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.4.4](https://redis.io/docs/stack/bloom) Time complexity: O(1) Returns the cardinality of a Bloom filter - the number of items that were added to a Bloom filter and detected as unique (items that caused at least one bit to be set in at least one sub-filter) (since RedisBloom 2.4.4) ### Parameters * **key**: The name of the filter Return ------ * [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) - the number of items that were added to this Bloom filter and detected as unique (items that caused at least one bit to be set in at least one sub-filter). * 0 when `key` does not exist. * Error when `key` is of a type other than Bloom filter. Note: when `key` exists - return the same value as `BF.INFO key ITEMS`. Examples -------- ``` redis> BF.ADD bf1 item_foo (integer) 1 redis> BF.CARD bf1 (integer) 1 redis> BF.CARD bf_new (integer) 0 ``` redis BITFIELD_RO BITFIELD\_RO ============ ``` BITFIELD_RO ``` Syntax ``` BITFIELD_RO key [GET encoding offset [GET encoding offset ...]] ``` Available since: 6.0.0 Time complexity: O(1) for each subcommand specified ACL categories: `@read`, `@bitmap`, `@fast`, Read-only variant of the [`BITFIELD`](../bitfield) command. It is like the original [`BITFIELD`](../bitfield) but only accepts the `GET` subcommand and can safely be used in read-only replicas.
Since the original [`BITFIELD`](../bitfield) has `SET` and `INCRBY` options it is technically flagged as a writing command in the Redis command table. For this reason read-only replicas in a Redis Cluster will redirect it to the master instance even if the connection is in read-only mode (see the [`READONLY`](../readonly) command of Redis Cluster). Since Redis 6.2, the `BITFIELD_RO` variant was introduced in order to allow [`BITFIELD`](../bitfield) behavior in read-only replicas without breaking compatibility on command flags. See the original [`BITFIELD`](../bitfield) for more details. Examples -------- ``` BITFIELD_RO hello GET i8 16 ``` Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): An array with each entry being the corresponding result of the subcommand given at the same position. redis TDIGEST.REVRANK TDIGEST.REVRANK =============== ``` TDIGEST.REVRANK ``` Syntax ``` TDIGEST.REVRANK key value [value ...] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.4.0](https://redis.io/docs/stack/bloom) Time complexity: O(1) Returns, for each input value (floating-point), the estimated reverse rank of the value (the number of observations in the sketch that are larger than the value + half the number of observations that are equal to the value). Multiple reverse ranks can be retrieved in a single call. Required arguments ------------------ `key` is key name for an existing t-digest sketch. `value` is input value for which the reverse rank should be estimated. Return value ------------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) - an array of integers populated with revrank\_1, revrank\_2, ..., revrank\_V: * -1 - when `value` is larger than the value of the largest observation. * The number of observations - when `value` is smaller than the value of the smallest observation.
* Otherwise: an estimation of the number of (observations larger than `value` + half the observations equal to `value`). 0 is the reverse rank of the value of the largest observation. *n*-1 is the reverse rank of the value of the smallest observation; *n* denotes the number of observations added to the sketch. All values are -2 if the sketch is empty. Examples -------- ``` redis> TDIGEST.CREATE s COMPRESSION 1000 OK redis> TDIGEST.ADD s 10 20 30 40 50 60 OK redis> TDIGEST.RANK s 0 10 20 30 40 50 60 70 1) (integer) -1 2) (integer) 0 3) (integer) 1 4) (integer) 2 5) (integer) 3 6) (integer) 4 7) (integer) 5 8) (integer) 6 redis> TDIGEST.REVRANK s 0 10 20 30 40 50 60 70 1) (integer) 6 2) (integer) 5 3) (integer) 4 4) (integer) 3 5) (integer) 2 6) (integer) 1 7) (integer) 0 8) (integer) -1 ``` ``` redis> TDIGEST.CREATE s COMPRESSION 1000 OK redis> TDIGEST.ADD s 10 10 10 10 20 20 OK redis> TDIGEST.RANK s 10 20 1) (integer) 2 2) (integer) 5 redis> TDIGEST.REVRANK s 10 20 1) (integer) 4 2) (integer) 1 ``` redis SCRIPT SCRIPT ====== ``` SCRIPT KILL ``` Syntax ``` SCRIPT KILL ``` Available since: 2.6.0 Time complexity: O(1) ACL categories: `@slow`, `@scripting`, Kills the currently executing [`EVAL`](../eval) script, assuming no write operation was yet performed by the script. This command is mainly useful to kill a script that is running for too long (for instance, because it entered an infinite loop due to a bug). The script will be killed, and the client currently blocked in EVAL will see the command returning with an error. If the script has already performed write operations, it cannot be killed in this way because it would violate Lua's script atomicity contract. In such a case, only `SHUTDOWN NOSAVE` can kill the script, killing the Redis process in a hard way and preventing it from persisting with half-written information.
For more information about [`EVAL`](../eval) scripts please refer to [Introduction to Eval Scripts](https://redis.io/topics/eval-intro). Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) redis HDEL HDEL ==== ``` HDEL ``` Syntax ``` HDEL key field [field ...] ``` Available since: 2.0.0 Time complexity: O(N) where N is the number of fields to be removed. ACL categories: `@write`, `@hash`, `@fast`, Removes the specified fields from the hash stored at `key`. Specified fields that do not exist within this hash are ignored. If `key` does not exist, it is treated as an empty hash and this command returns `0`. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of fields that were removed from the hash, not including specified but non existing fields. Examples -------- ``` HSET myhash field1 "foo" HDEL myhash field1 HDEL myhash field2 ``` History ------- * Starting with Redis version 2.4.0: Accepts multiple `field` arguments. redis ECHO ECHO ==== ``` ECHO ``` Syntax ``` ECHO message ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@fast`, `@connection`, Returns `message`. Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings) Examples -------- ``` ECHO "Hello World!" ``` redis HSETNX HSETNX ====== ``` HSETNX ``` Syntax ``` HSETNX key field value ``` Available since: 2.0.0 Time complexity: O(1) ACL categories: `@write`, `@hash`, `@fast`, Sets `field` in the hash stored at `key` to `value`, only if `field` does not yet exist. If `key` does not exist, a new key holding a hash is created. If `field` already exists, this operation has no effect. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), specifically: * `1` if `field` is a new field in the hash and `value` was set. * `0` if `field` already exists in the hash and no operation was performed. 
Examples -------- ``` HSETNX myhash field "Hello" HSETNX myhash field "World" HGET myhash field ``` redis LATENCY LATENCY ======= ``` LATENCY LATEST ``` Syntax ``` LATENCY LATEST ``` Available since: 2.8.13 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, The `LATENCY LATEST` command reports the latest latency events logged. Each reported event has the following fields: * Event name. * Unix timestamp of the latest latency spike for the event. * Latest event latency in milliseconds. * All-time maximum latency for this event. "All-time" means the maximum latency since the Redis instance was started, or the time that events were reset with [`LATENCY RESET`](../latency-reset). Examples -------- ``` 127.0.0.1:6379> debug sleep 1 OK (1.00s) 127.0.0.1:6379> debug sleep .25 OK 127.0.0.1:6379> latency latest 1) 1) "command" 2) (integer) 1405067976 3) (integer) 251 4) (integer) 1001 ``` For more information refer to the [Latency Monitoring Framework page](https://redis.io/topics/latency-monitor). Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): specifically: The command returns an array where each element is a four-element array representing the event's name, timestamp, latest and all-time latency measurements. redis FT.DICTDEL FT.DICTDEL ========== ``` FT.DICTDEL ``` Syntax ``` FT.DICTDEL dict term [term ...] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.4.0](https://redis.io/docs/stack/search) Time complexity: O(1) Delete terms from a dictionary [Examples](#examples) Required arguments ------------------ `dict` is dictionary name. `term` is the term to delete from the dictionary. Return ------ FT.DICTDEL returns an integer reply, the number of terms that were deleted.
Examples -------- **Delete terms from a dictionary** ``` 127.0.0.1:6379> FT.DICTDEL dict foo bar "hello world" (integer) 3 ``` See also -------- [`FT.DICTADD`](../ft.dictadd) | [`FT.DICTDUMP`](../ft.dictdump) Related topics -------------- [RediSearch](https://redis.io/docs/stack/search) redis CF.ADD CF.ADD ====== ``` CF.ADD ``` Syntax ``` CF.ADD key item ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 1.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(k + i), where k is the number of sub-filters and i is maxIterations Adds an item to the cuckoo filter, creating the filter if it does not exist. Cuckoo filters can contain the same item multiple times, and consider each insert as separate. You can use [`CF.ADDNX`](../cf.addnx) to only add the item if it does not exist yet. Keep in mind that deleting an element inserted using [`CF.ADDNX`](../cf.addnx) may cause false-negative errors. ### Parameters * **key**: The name of the filter * **item**: The item to add ### Complexity O(n + i), where n is the number of `sub-filters` and i is `maxIterations`. Adding items requires up to 2 memory accesses per `sub-filter`. But as the filter fills up, both locations for an item might be full. The filter attempts to `Cuckoo` swap items up to `maxIterations` times. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) - "1" if executed correctly, or [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors) otherwise. ``` redis> CF.ADD cf item (integer) 1 ``` redis TOPK.INFO TOPK.INFO ========= ``` TOPK.INFO ``` Syntax ``` TOPK.INFO key ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(1) Returns number of required items (k), width, depth and decay values. ### Parameters * **key**: Name of sketch. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) with information of the filter. 
Examples -------- ``` TOPK.INFO topk 1) k 2) (integer) 50 3) width 4) (integer) 2000 5) depth 6) (integer) 7 7) decay 8) "0.92500000000000004" ``` redis SUBSTR SUBSTR ====== ``` SUBSTR (deprecated) ``` As of Redis version 2.0.0, this command is regarded as deprecated. It can be replaced by [`GETRANGE`](../getrange) when migrating or writing new code. Syntax ``` SUBSTR key start end ``` Available since: 1.0.0 Time complexity: O(N) where N is the length of the returned string. The complexity is ultimately determined by the returned length, but because creating a substring from an existing string is very cheap, it can be considered O(1) for small strings. ACL categories: `@read`, `@string`, `@slow`, Returns the substring of the string value stored at `key`, determined by the offsets `start` and `end` (both are inclusive). Negative offsets can be used in order to provide an offset starting from the end of the string. So -1 means the last character, -2 the penultimate and so forth. The function handles out of range requests by limiting the resulting range to the actual length of the string. Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings) Examples -------- ``` SET mykey "This is a string" GETRANGE mykey 0 3 GETRANGE mykey -3 -1 GETRANGE mykey 0 -1 GETRANGE mykey 10 100 ``` redis CLUSTER CLUSTER ======= ``` CLUSTER ADDSLOTSRANGE ``` Syntax ``` CLUSTER ADDSLOTSRANGE start-slot end-slot [start-slot end-slot ...] ``` Available since: 7.0.0 Time complexity: O(N) where N is the total number of the slots between the start slot and end slot arguments. ACL categories: `@admin`, `@slow`, `@dangerous`, The `CLUSTER ADDSLOTSRANGE` is similar to the [`CLUSTER ADDSLOTS`](../cluster-addslots) command in that they both assign hash slots to nodes. 
The difference between the two commands is that `ADDSLOTS` takes a list of slots to assign to the node, while `ADDSLOTSRANGE` takes a list of slot ranges (specified by start and end slots) to assign to the node. Example ------- To assign slots 1 2 3 4 5 to the node, the `ADDSLOTS` command is: ``` > CLUSTER ADDSLOTS 1 2 3 4 5 OK ``` The same operation can be completed with the following `ADDSLOTSRANGE` command: ``` > CLUSTER ADDSLOTSRANGE 1 5 OK ``` Usage in Redis Cluster ---------------------- This command only works in cluster mode and is useful in the following Redis Cluster operations: 1. To create a new cluster, ADDSLOTSRANGE is used in order to initially set up master nodes, splitting the available hash slots among them. 2. In order to fix a broken cluster where certain slots are unassigned. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` if the command was successful. Otherwise an error is returned. redis JSON.NUMINCRBY JSON.NUMINCRBY ============== ``` JSON.NUMINCRBY ``` Syntax ``` JSON.NUMINCRBY key path value ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [JSON 1.0.0](https://redis.io/docs/stack/json) Time complexity: O(1) when path is evaluated to a single value, O(N) when path is evaluated to multiple values, where N is the size of the key Increment the number value stored at `path` by `value` [Examples](#examples) Required arguments ------------------ `key` is key to modify. `path` is JSONPath to specify. `value` is the number value to increment by. Return ------ JSON.NUMINCRBY returns a bulk string reply with the stringified new value for each path, or `nil`, if the matching JSON value is not a number. For more information about replies, see [Redis serialization protocol specification](https://redis.io/docs/reference/protocol-spec). Examples -------- **Increment number values** Create a document. ``` 127.0.0.1:6379> JSON.SET doc .
'{"a":"b","b":[{"a":2}, {"a":5}, {"a":"c"}]}' OK ``` Increment the value of the `a` object by 2. The command fails to find a number and returns `null`. ``` 127.0.0.1:6379> JSON.NUMINCRBY doc $.a 2 "[null]" ``` Recursively find and increment the value of all `a` objects. The command increments the numbers it finds and returns `null` for non-number values. ``` 127.0.0.1:6379> JSON.NUMINCRBY doc $..a 2 "[null,4,7,null]" ``` See also -------- [`JSON.ARRINDEX`](../json.arrindex) | [`JSON.ARRINSERT`](../json.arrinsert) Related topics -------------- * [RedisJSON](https://redis.io/docs/stack/json) * [Index and search JSON documents](https://redis.io/docs/stack/search/indexing_json) redis LATENCY LATENCY ======= ``` LATENCY GRAPH ``` Syntax ``` LATENCY GRAPH event ``` Available since: 2.8.13 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, Produces an ASCII-art style graph for the specified event. `LATENCY GRAPH` lets you intuitively understand the latency trend of an `event` via state-of-the-art visualization. It can be used for quickly grasping the situation before resorting to means such as parsing the raw data from [`LATENCY HISTORY`](../latency-history) or external tooling.
Valid values for `event` are: * `active-defrag-cycle` * `aof-fsync-always` * `aof-stat` * `aof-rewrite-diff-write` * `aof-rename` * `aof-write` * `aof-write-active-child` * `aof-write-alone` * `aof-write-pending-fsync` * `command` * `expire-cycle` * `eviction-cycle` * `eviction-del` * `fast-command` * `fork` * `rdb-unlink-temp-file` Examples -------- ``` 127.0.0.1:6379> latency reset command (integer) 0 127.0.0.1:6379> debug sleep .1 OK 127.0.0.1:6379> debug sleep .2 OK 127.0.0.1:6379> debug sleep .3 OK 127.0.0.1:6379> debug sleep .5 OK 127.0.0.1:6379> debug sleep .4 OK 127.0.0.1:6379> latency graph command command - high 500 ms, low 101 ms (all time high 500 ms) -------------------------------------------------------------------------------- #_ _|| _||| _|||| 11186 542ss sss ``` The vertical labels under each graph column represent the amount of seconds, minutes, hours or days ago the event happened. For example "15s" means that the first graphed event happened 15 seconds ago. The graph is normalized in the min-max scale so that the zero (the underscore in the lower row) is the minimum, and a # in the higher row is the maximum. For more information refer to the [Latency Monitoring Framework page](https://redis.io/topics/latency-monitor). Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings)
redis MEMORY MEMORY ====== ``` MEMORY STATS ``` Syntax ``` MEMORY STATS ``` Available since: 4.0.0 Time complexity: O(1) ACL categories: `@slow`, The `MEMORY STATS` command returns an [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) about the memory usage of the server. The information about memory usage is provided as metrics and their respective values. The following metrics are reported: * `peak.allocated`: Peak memory consumed by Redis in bytes (see [`INFO`](../info)'s `used_memory_peak`) * `total.allocated`: Total number of bytes allocated by Redis using its allocator (see [`INFO`](../info)'s `used_memory`) * `startup.allocated`: Initial amount of memory consumed by Redis at startup in bytes (see [`INFO`](../info)'s `used_memory_startup`) * `replication.backlog`: Size in bytes of the replication backlog (see [`INFO`](../info)'s `repl_backlog_active`) * `clients.slaves`: The total size in bytes of all replicas overheads (output and query buffers, connection contexts) * `clients.normal`: The total size in bytes of all clients overheads (output and query buffers, connection contexts) * `cluster.links`: Memory usage by cluster links (Added in Redis 7.0, see [`INFO`](../info)'s `mem_cluster_links`). * `aof.buffer`: The summed size in bytes of AOF related buffers. * `lua.caches`: the summed size in bytes of the overheads of the Lua scripts' caches * `dbXXX`: For each of the server's databases, the overheads of the main and expiry dictionaries (`overhead.hashtable.main` and `overhead.hashtable.expires`, respectively) are reported in bytes * `overhead.total`: The sum of all overheads, i.e. 
`startup.allocated`, `replication.backlog`, `clients.slaves`, `clients.normal`, `aof.buffer` and those of the internal data structures that are used in managing the Redis keyspace (see [`INFO`](../info)'s `used_memory_overhead`) * `keys.count`: The total number of keys stored across all databases in the server * `keys.bytes-per-key`: The ratio between **net memory usage** (`total.allocated` minus `startup.allocated`) and `keys.count` * `dataset.bytes`: The size in bytes of the dataset, i.e. `overhead.total` subtracted from `total.allocated` (see [`INFO`](../info)'s `used_memory_dataset`) * `dataset.percentage`: The percentage of `dataset.bytes` out of the net memory usage * `peak.percentage`: The percentage of `peak.allocated` out of `total.allocated` * `fragmentation`: See [`INFO`](../info)'s `mem_fragmentation_ratio` Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): nested list of memory usage metrics and their values **A note about the word slave used in this man page**: Starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API will be naturally deprecated. redis JSON.ARRAPPEND JSON.ARRAPPEND ============== ``` JSON.ARRAPPEND ``` Syntax ``` JSON.ARRAPPEND key [path] value [value ...] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [JSON 1.0.0](https://redis.io/docs/stack/json) Time complexity: O(1) when path is evaluated to a single value, O(N) when path is evaluated to multiple values, where N is the size of the key Append the `json` values into the array at `path` after the last element in it [Examples](#examples) Required arguments ------------------ `key` is key to modify. `value` is one or more values to append to one or more arrays. 
About using strings with JSON commands To specify a string as an array value to append, wrap the quoted string with an additional set of single quotes. Example: `'"silver"'`. For more detailed use, see [Examples](#examples). Optional arguments ------------------ `path` is JSONPath to specify. Default is root `$`. Return value ------------ `JSON.ARRAPPEND` returns an [array](https://redis.io/docs/reference/protocol-spec/#resp-arrays) of integer replies for each path, the array's new size, or `nil`, if the matching JSON value is not an array. For more information about replies, see [Redis serialization protocol specification](https://redis.io/docs/reference/protocol-spec). Examples -------- **Add a new color to a list of product colors** Create a document for noise-cancelling headphones in black and silver colors. ``` 127.0.0.1:6379> JSON.SET item:1 $ '{"name":"Noise-cancelling Bluetooth headphones","description":"Wireless Bluetooth headphones with noise-cancelling technology","connection":{"wireless":true,"type":"Bluetooth"},"price":99.98,"stock":25,"colors":["black","silver"]}' OK ``` Add color `blue` to the end of the `colors` array. `JSON.ARRAPPEND` returns the array's new size. ``` 127.0.0.1:6379> JSON.ARRAPPEND item:1 $.colors '"blue"' 1) (integer) 3 ``` Get the updated document.
``` 127.0.0.1:6379> JSON.GET item:1 "{\"name\":\"Noise-cancelling Bluetooth headphones\",\"description\":\"Wireless Bluetooth headphones with noise-cancelling technology\",\"connection\":{\"wireless\":true,\"type\":\"Bluetooth\"},\"price\":99.98,\"stock\":25,\"colors\":[\"black\",\"silver\",\"blue\"]}" ``` See also -------- [`JSON.ARRINDEX`](../json.arrindex) | [`JSON.ARRINSERT`](../json.arrinsert) Related topics -------------- * [RedisJSON](https://redis.io/docs/stack/json) * [Index and search JSON documents](https://redis.io/docs/stack/search/indexing_json) redis JSON.ARRTRIM JSON.ARRTRIM ============ ``` JSON.ARRTRIM ``` Syntax ``` JSON.ARRTRIM key path start stop ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [JSON 1.0.0](https://redis.io/docs/stack/json) Time complexity: O(N) when path is evaluated to a single value where N is the size of the array, O(N) when path is evaluated to multiple values, where N is the size of the key Trim an array so that it contains only the specified inclusive range of elements [Examples](#examples) Required arguments ------------------ `key` is key to modify. Optional arguments ------------------ `path` is JSONPath to specify. Default is root `$`. `start` is index of the first element to keep (previous elements are trimmed). Default is 0. `stop` is the index of the last element to keep (following elements are trimmed), including the last element. Default is 0. Negative values are interpreted as starting from the end. About out-of-range indexes JSON.ARRTRIM is extremely forgiving, and using it with out-of-range indexes does not produce an error. Note a few differences between how RedisJSON v2.0 and legacy versions handle out-of-range indexes. Behavior as of RedisJSON v2.0: * If `start` is larger than the array's size or `start` > `stop`, returns 0 and an empty array. * If `start` is < 0, then start from the end of the array. * If `stop` is larger than the end of the array, it is treated like the last element. 
Return ------ JSON.ARRTRIM returns an array of integer replies for each path, the array's new size, or `nil`, if the matching JSON value is not an array. For more information about replies, see [Redis serialization protocol specification](https://redis.io/docs/reference/protocol-spec). Examples -------- **Trim an array to a specific set of values** Create two headphone products with maximum sound levels. ``` 127.0.0.1:6379> JSON.SET key $ '[{"name":"Healthy headphones","description":"Wireless Bluetooth headphones with noise-cancelling technology","connection":{"wireless":true,"type":"Bluetooth"},"price":99.98,"stock":25,"colors":["black","silver"],"max_level":[60,70,80]},{"name":"Noisy headphones","description":"Wireless Bluetooth headphones with noise-cancelling technology","connection":{"wireless":true,"type":"Bluetooth"},"price":99.98,"stock":25,"colors":["black","silver"],"max_level":[85,90,100,120]}]' OK ``` Add new sound level values to the second product. ``` 127.0.0.1:6379> JSON.ARRAPPEND key $.[1].max_level 140 160 180 200 220 240 260 280 1) (integer) 12 ``` Get the updated array. ``` 127.0.0.1:6379> JSON.GET key $.[1].max_level "[[85,90,100,120,140,160,180,200,220,240,260,280]]" ``` Keep only the values between the fifth and the ninth element, inclusive of that last element. ``` 127.0.0.1:6379> JSON.ARRTRIM key $.[1].max_level 4 8 1) (integer) 5 ``` Get the updated array.
``` 127.0.0.1:6379> JSON.GET key $.[1].max_level "[[140,160,180,200,220]]" ``` See also -------- [`JSON.ARRINDEX`](../json.arrindex) | [`JSON.ARRINSERT`](../json.arrinsert) Related topics -------------- * [RedisJSON](https://redis.io/docs/stack/json) * [Index and search JSON documents](https://redis.io/docs/stack/search/indexing_json) redis OBJECT OBJECT ====== ``` OBJECT REFCOUNT ``` Syntax ``` OBJECT REFCOUNT key ``` Available since: 2.2.3 Time complexity: O(1) ACL categories: `@keyspace`, `@read`, `@slow`, This command returns the reference count of the value stored at `<key>`. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) The number of references. redis TTL TTL === ``` TTL ``` Syntax ``` TTL key ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@keyspace`, `@read`, `@fast`, Returns the remaining time to live of a key that has a timeout. This introspection capability allows a Redis client to check how many seconds a given key will continue to be part of the dataset. In Redis 2.6 or older the command returns `-1` if the key does not exist or if the key exists but has no associated expire. Starting with Redis 2.8 the return value in case of error changed: * The command returns `-2` if the key does not exist. * The command returns `-1` if the key exists but has no associated expire. See also the [`PTTL`](../pttl) command that returns the same information with milliseconds resolution (only available in Redis 2.6 or greater). Examples -------- ``` SET mykey "Hello" EXPIRE mykey 10 TTL mykey ``` History ------- * Starting with Redis version 2.8.0: Added the -2 reply.
redis FLUSHDB FLUSHDB ======= ``` FLUSHDB ``` Syntax ``` FLUSHDB [ASYNC | SYNC] ``` Available since: 1.0.0 Time complexity: O(N) where N is the number of keys in the selected database ACL categories: `@keyspace`, `@write`, `@slow`, `@dangerous`, Delete all the keys of the currently selected DB. This command never fails. By default, `FLUSHDB` will synchronously flush all keys from the database. Starting with Redis 6.2, setting the **lazyfree-lazy-user-flush** configuration directive to "yes" changes the default flush mode to asynchronous. It is possible to use one of the following modifiers to dictate the flushing mode explicitly: * `ASYNC`: flushes the database asynchronously * `SYNC`: flushes the database synchronously Note: an asynchronous `FLUSHDB` command only deletes keys that were present at the time the command was invoked. Keys created during an asynchronous flush will be unaffected. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) Behavior change history ----------------------- * `>= 6.2.0`: Default flush behavior now configurable by the **lazyfree-lazy-user-flush** configuration directive. History ------- * Starting with Redis version 4.0.0: Added the `ASYNC` flushing mode modifier. * Starting with Redis version 6.2.0: Added the `SYNC` flushing mode modifier. redis SHUTDOWN SHUTDOWN ======== ``` SHUTDOWN ``` Syntax ``` SHUTDOWN [NOSAVE | SAVE] [NOW] [FORCE] [ABORT] ``` Available since: 1.0.0 Time complexity: O(N) when saving, where N is the total number of keys in all databases when saving data, otherwise O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, The command behavior is the following: * If there are any replicas lagging behind in replication: + Pause clients attempting to write by performing a [`CLIENT PAUSE`](../client-pause) with the `WRITE` option. + Wait up to the configured `shutdown-timeout` (default 10 seconds) for replicas to catch up the replication offset. * Stop all the clients. 
* Perform a blocking SAVE if at least one **save point** is configured. * Flush the Append Only File if AOF is enabled. * Quit the server. If persistence is enabled, this command makes sure that Redis is switched off without any data loss. Note: A Redis instance that is configured not to persist on disk (no AOF configured, nor "save" directive) will not dump the RDB file on `SHUTDOWN`, as usually you don't want Redis instances used only for caching to block when shutting down. Also note: If Redis receives one of the signals `SIGTERM` and `SIGINT`, the same shutdown sequence is performed. See also [Signal Handling](https://redis.io/topics/signals). Modifiers --------- It is possible to specify optional modifiers to alter the behavior of the command. Specifically: * **SAVE** will force a DB saving operation even if no save points are configured. * **NOSAVE** will prevent a DB saving operation even if one or more save points are configured. * **NOW** skips waiting for lagging replicas, i.e. it bypasses the first step in the shutdown sequence. * **FORCE** ignores any errors that would normally prevent the server from exiting. For details, see the following section. * **ABORT** cancels an ongoing shutdown and cannot be combined with other flags. Conditions where a SHUTDOWN fails --------------------------------- When a save point is configured or the **SAVE** modifier is specified, the shutdown may fail if the RDB file can't be saved. Then, the server continues to run in order to ensure no data loss. This may be bypassed using the **FORCE** modifier, causing the server to exit anyway. When the Append Only File is enabled the shutdown may fail because the system is in a state that does not allow it to safely persist on disk immediately. Normally if there is an AOF child process performing an AOF rewrite, Redis will simply kill it and exit.
However, there are situations where it is unsafe to do so and, unless the **FORCE** modifier is specified, the **SHUTDOWN** command will be refused with an error instead. This happens in the following situations: * The user just turned on AOF, and the server triggered the first AOF rewrite in order to create the initial AOF file. In this context, stopping will result in losing the dataset entirely: once restarted, the server will potentially have AOF enabled without having any AOF file at all. * A replica with AOF enabled, reconnected with its master, performed a full resynchronization, and restarted the AOF file, triggering the initial AOF creation process. In this case not completing the AOF rewrite is dangerous because the latest dataset received from the master would be lost. The new master can actually even be a different instance (if the **REPLICAOF** or **SLAVEOF** command was used in order to reconfigure the replica), so it is important to finish the AOF rewrite and restart with the correct dataset, the one that was in memory when the server was terminated. There are situations when we just want to terminate a Redis instance ASAP, regardless of what its content is. In such a case, the command **SHUTDOWN NOW NOSAVE FORCE** can be used. In versions before 7.0, where the **NOW** and **FORCE** flags are not available, the right combination of commands is to send a **CONFIG SET appendonly no** followed by a **SHUTDOWN NOSAVE**. The first command will turn off the AOF if needed, and will terminate the AOF rewriting child if there is one active. The second command will then execute without any problem since the AOF is no longer enabled. Minimize the risk of data loss ------------------------------ Since Redis 7.0, the server waits for lagging replicas up to a configurable `shutdown-timeout`, by default 10 seconds, before shutting down.
This provides a best effort to minimize the risk of data loss in a situation where no save points are configured and AOF is disabled. Before version 7.0, shutting down a heavily loaded master node in a diskless setup was more likely to result in data loss. To minimize the risk of data loss in such setups, it's advised to trigger a manual [`FAILOVER`](../failover) (or [`CLUSTER FAILOVER`](../cluster-failover)) to demote the master to a replica and promote one of the replicas to be the new master, before shutting down a master node. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` if `ABORT` was specified and shutdown was aborted. On successful shutdown, nothing is returned since the server quits and the connection is closed. On failure, an error is returned. Behavior change history ----------------------- * `>= 7.0.0`: Introduced waiting for lagging replicas before exiting. History ------- * Starting with Redis version 7.0.0: Added the `NOW`, `FORCE` and `ABORT` modifiers. redis TS.INFO TS.INFO ======= ``` TS.INFO ``` Syntax ``` TS.INFO key [DEBUG] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [TimeSeries 1.0.0](https://redis.io/docs/stack/timeseries) Time complexity: O(1) Return information and statistics for a time series. [Examples](#examples) Required arguments ------------------ `key` is key name of the time series. Optional arguments ------------------ `[DEBUG]` is an optional flag to get more detailed information about the chunks.
Return value ------------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) with information about the time series (name-value pairs): | Name ([Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings)) | Description | | --- | --- | | `totalSamples` | [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) Total number of samples in this time series | | `memoryUsage` | [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) Total number of bytes allocated for this time series, which is the sum of: the memory used for storing the series' configuration parameters (retention period, duplication policy, etc.), the memory used for storing the series' compaction rules, the memory used for storing the series' labels (key-value pairs), and the memory used for storing the chunks (chunk header + compressed/uncompressed data) | | `firstTimestamp` | [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) First timestamp present in this time series | | `lastTimestamp` | [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) Last timestamp present in this time series | | `retentionTime` | [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) The retention period, in milliseconds, for this time series | | `chunkCount` | [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) Number of chunks used for this time series | | `chunkSize` | [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) The initial allocation size, in bytes, for the data part of each new chunk. Actual chunks may consume more memory. Changing the chunk size (using [`TS.ALTER`](../ts.alter)) does not affect existing chunks.
| | `chunkType` | [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) The chunk type: `compressed` or `uncompressed` | | `duplicatePolicy` | [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) or [Null reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings) The [duplicate policy](https://redis.io/docs/stack/timeseries/configuration/#duplicate_policy) of this time series | | `labels` | [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) or [Null reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings) Metadata labels of this time series. Each element is a 2-element [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) of ([Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings), [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings)) representing (label, value) | | `sourceKey` | [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings) or [Null reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings) Key name for the source time series, in case the current series is a target of a [compaction rule](../ts.createrule/index) | | `rules` | [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) [Compaction rules](../ts.createrule/index) defined in this time series. Each rule is an [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) with 4 elements: - [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): The compaction key - [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): The bucket duration - [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): The aggregator - [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): The alignment (since RedisTimeSeries v1.8) | When `DEBUG`
is specified, the response also contains: | Name ([Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings)) | Description | | --- | --- | | `keySelfName` | [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings) Name of the key | | `Chunks` | [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) with information about the chunks. Each element is an [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) of information about a single chunk, as name ([Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings))-value pairs: - `startTimestamp` - [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) - First timestamp present in the chunk - `endTimestamp` - [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) - Last timestamp present in the chunk - `samples` - [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) - Total number of samples in the chunk - `size` - [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) - The chunk's internal data size (without overheads) in bytes - `bytesPerSample` - [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings) (double) - Ratio of `size` and `samples` | Examples -------- **Find information about a temperature/humidity time series by location and sensor type** Create a set of sensors to measure temperature and humidity in your study and kitchen. ``` 127.0.0.1:6379> TS.CREATE telemetry:study:temperature LABELS room study type temperature OK 127.0.0.1:6379> TS.CREATE telemetry:study:humidity LABELS room study type humidity OK 127.0.0.1:6379> TS.CREATE telemetry:kitchen:temperature LABELS room kitchen type temperature OK 127.0.0.1:6379> TS.CREATE telemetry:kitchen:humidity LABELS room kitchen type humidity OK ``` Find information about the time series for temperature in the kitchen.
``` 127.0.0.1:6379> TS.INFO telemetry:kitchen:temperature 1) totalSamples 2) (integer) 0 3) memoryUsage 4) (integer) 4246 5) firstTimestamp 6) (integer) 0 7) lastTimestamp 8) (integer) 0 9) retentionTime 10) (integer) 0 11) chunkCount 12) (integer) 1 13) chunkSize 14) (integer) 4096 15) chunkType 16) compressed 17) duplicatePolicy 18) (nil) 19) labels 20) 1) 1) "room" 2) "kitchen" 2) 1) "type" 2) "temperature" 21) sourceKey 22) (nil) 23) rules 24) (empty array) ``` Query the time series using DEBUG to get more information about the chunks. ``` 127.0.0.1:6379> TS.INFO telemetry:kitchen:temperature DEBUG 1) totalSamples 2) (integer) 0 3) memoryUsage 4) (integer) 4246 5) firstTimestamp 6) (integer) 0 7) lastTimestamp 8) (integer) 0 9) retentionTime 10) (integer) 0 11) chunkCount 12) (integer) 1 13) chunkSize 14) (integer) 4096 15) chunkType 16) compressed 17) duplicatePolicy 18) (nil) 19) labels 20) 1) 1) "room" 2) "kitchen" 2) 1) "type" 2) "temperature" 21) sourceKey 22) (nil) 23) rules 24) (empty array) 25) keySelfName 26) "telemetry:kitchen:temperature" 27) Chunks 28) 1) 1) startTimestamp 2) (integer) 0 3) endTimestamp 4) (integer) 0 5) samples 6) (integer) 0 7) size 8) (integer) 4096 9) bytesPerSample 10) "inf" ``` See also -------- [`TS.RANGE`](../ts.range) | [`TS.QUERYINDEX`](../ts.queryindex) | [`TS.GET`](../ts.get) Related topics -------------- [RedisTimeSeries](https://redis.io/docs/stack/timeseries)
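When scripting against `TS.INFO`, the flat name-value reply shown above is usually folded into a dictionary. A minimal Python sketch of that folding, operating on an already-fetched reply list (no client library or server is assumed, and `ts_info_to_dict` is a hypothetical helper name, not part of any Redis client):

```python
def ts_info_to_dict(reply):
    """Fold TS.INFO's flat [name, value, name, value, ...] reply into a
    dict, and turn the nested `labels` pairs into a dict as well."""
    info = dict(zip(reply[0::2], reply[1::2]))
    if info.get("labels"):
        # labels arrive as a list of [label, value] pairs
        info["labels"] = {label: value for label, value in info["labels"]}
    return info

# A trimmed-down version of the reply shown in the example above:
reply = ["totalSamples", 0, "chunkCount", 1, "chunkSize", 4096,
         "labels", [["room", "kitchen"], ["type", "temperature"]]]
info = ts_info_to_dict(reply)
```

The same even/odd pairing works for the extra `DEBUG` fields, since they follow the same name-value convention.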
redis BITOP BITOP ===== ``` BITOP ``` Syntax ``` BITOP <AND | OR | XOR | NOT> destkey key [key ...] ``` Available since: 2.6.0 Time complexity: O(N) ACL categories: `@write`, `@bitmap`, `@slow`, Perform a bitwise operation between multiple keys (containing string values) and store the result in the destination key. The `BITOP` command supports four bitwise operations: **AND**, **OR**, **XOR** and **NOT**, thus the valid forms to call the command are: * `BITOP AND destkey srckey1 srckey2 srckey3 ... srckeyN` * `BITOP OR destkey srckey1 srckey2 srckey3 ... srckeyN` * `BITOP XOR destkey srckey1 srckey2 srckey3 ... srckeyN` * `BITOP NOT destkey srckey` As you can see, **NOT** is special as it only takes an input key, because it performs inversion of bits, so it only makes sense as a unary operator. The result of the operation is always stored at `destkey`. Handling of strings with different lengths ------------------------------------------ When an operation is performed between strings having different lengths, all the strings shorter than the longest string in the set are treated as if they were zero-padded up to the length of the longest string. The same holds true for non-existent keys, which are considered as a stream of zero bytes up to the length of the longest string. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) The size of the string stored in the destination key, that is equal to the size of the longest input string. Examples -------- ``` SET key1 "foobar" SET key2 "abcdef" BITOP AND dest key1 key2 GET dest ``` Pattern: real time metrics using bitmaps ---------------------------------------- `BITOP` is a good complement to the pattern documented in the [`BITCOUNT`](../bitcount) command documentation. Different bitmaps can be combined in order to obtain a target bitmap where the population counting operation is performed.
See the article called "[Fast easy realtime metrics using Redis bitmaps](http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps)" for an interesting use case. Performance considerations -------------------------- `BITOP` is a potentially slow command as it runs in O(N) time. Care should be taken when running it against long input strings. For real-time metrics and statistics involving large inputs a good approach is to use a replica (with the replica-read-only option enabled) where the bit-wise operations are performed, to avoid blocking the master instance. redis JSON.ARRINDEX JSON.ARRINDEX ============= ``` JSON.ARRINDEX ``` Syntax ``` JSON.ARRINDEX key path value [start [stop]] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [JSON 1.0.0](https://redis.io/docs/stack/json) Time complexity: O(N) when path is evaluated to a single value where N is the size of the array, O(N) when path is evaluated to multiple values, where N is the size of the key Search for the first occurrence of a JSON value in an array [Examples](#examples) Required arguments ------------------ `key` is key to parse. `path` is JSONPath to specify. `value` is value to find its index in one or more arrays. About using strings with JSON commands To specify a string as an array value to index, wrap the quoted string with an additional set of single quotes. Example: `'"silver"'`. For more detailed use, see [Examples](#examples). Optional arguments ------------------ `start` is inclusive start value to specify in a slice of the array to search. Default is `0`. `stop` is exclusive stop value to specify in a slice of the array to search. Default is `0`, meaning that the search continues through the last element. Negative values are interpreted as starting from the end. About out-of-range indexes Out-of-range indexes round to the array's start and end. An inverse index range (such as the range from 1 to 0) returns unfound or `-1`.
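The `start`/`stop` rules can be modeled in a few lines of Python. This is one possible reading of the documented slice semantics, not the module's implementation (`arrindex` is a hypothetical helper name, and edge cases such as the "range from 1 to 0" example may behave differently in the actual module):

```python
def arrindex(arr, value, start=0, stop=0):
    """Model JSON.ARRINDEX: search arr[start:stop] for value, where
    stop is exclusive, 0 means "through the end", negatives count
    from the end, and out-of-range indexes round to the array ends."""
    n = len(arr)
    if start < 0:
        start += n
    if stop <= 0:               # 0 (the default) means search to the end
        stop += n
    start = max(0, min(start, n))   # clamp out-of-range indexes
    stop = max(0, min(stop, n))
    try:
        return arr.index(value, start, stop)
    except ValueError:          # not found, or an empty/inverse slice
        return -1
```

For example, `arrindex(["black", "silver", "blue"], "silver")` matches the `JSON.ARRINDEX item:1 $..colors '"silver"'` example later in this section, returning `1`.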
Return value ------------ `JSON.ARRINDEX` returns an [array](https://redis.io/docs/reference/protocol-spec/#resp-arrays) of integer replies for each path, the first position in the array of each JSON value that matches the path, `-1` if unfound in the array, or `nil`, if the matching JSON value is not an array. For more information about replies, see [Redis serialization protocol specification](https://redis.io/docs/reference/protocol-spec). Examples -------- **Find the specific place of a color in a list of product colors** Create a document for noise-cancelling headphones in black and silver colors. ``` 127.0.0.1:6379> JSON.SET item:1 $ '{"name":"Noise-cancelling Bluetooth headphones","description":"Wireless Bluetooth headphones with noise-cancelling technology","connection":{"wireless":true,"type":"Bluetooth"},"price":99.98,"stock":25,"colors":["black","silver"]}' OK ``` Add color `blue` to the end of the `colors` array. `JSON.ARRAPPEND` returns the array's new size. ``` 127.0.0.1:6379> JSON.ARRAPPEND item:1 $.colors '"blue"' 1) (integer) 3 ``` Get the updated document. ``` JSON.GET item:1 "{\"name\":\"Noise-cancelling Bluetooth headphones\",\"description\":\"Wireless Bluetooth headphones with noise-cancelling technology\",\"connection\":{\"wireless\":true,\"type\":\"Bluetooth\"},\"price\":99.98,\"stock\":25,\"colors\":[\"black\",\"silver\",\"blue\"]}" ``` Get the list of colors for the product. ``` 127.0.0.1:6379> JSON.GET item:1 '$.colors[*]' "[\"black\",\"silver\",\"blue\"]" ``` Insert two more colors after the second color. You now have five colors. ``` 127.0.0.1:6379> JSON.ARRINSERT item:1 $.colors 2 '"yellow"' '"gold"' 1) (integer) 5 ``` Get the updated list of colors. ``` 127.0.0.1:6379> JSON.GET item:1 $.colors "[[\"black\",\"silver\",\"yellow\",\"gold\",\"blue\"]]" ``` Find the place where color `silver` is located.
``` 127.0.0.1:6379> JSON.ARRINDEX item:1 $..colors '"silver"' 1) (integer) 1 ``` See also -------- [`JSON.ARRAPPEND`](../json.arrappend) | [`JSON.ARRINSERT`](../json.arrinsert) Related topics -------------- * [RedisJSON](https://redis.io/docs/stack/json) * [Index and search JSON documents](https://redis.io/docs/stack/search/indexing_json) redis FT.EXPLAINCLI FT.EXPLAINCLI ============= ``` FT.EXPLAINCLI ``` Syntax ``` FT.EXPLAINCLI index query [DIALECT dialect] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.0.0](https://redis.io/docs/stack/search) Time complexity: O(1) Return the execution plan for a complex query but formatted for easier reading without using `redis-cli --raw` [Examples](#examples) Required arguments ------------------ `index` is index name. You must first create the index using [`FT.CREATE`](../ft.create). `query` is query string, as if sent to `FT.SEARCH`. Optional arguments ------------------ `DIALECT {dialect_version}` is dialect version under which to execute the query. If not specified, the query executes under the default dialect version set during module initial loading or via [`FT.CONFIG SET`](../ft.config-set) command. Note In the returned response, a `+` on a term is an indication of stemming. Return ------ FT.EXPLAINCLI returns an array reply with a string representing the execution plan.
Examples -------- **Return the execution plan for a complex query** ``` $ redis-cli 127.0.0.1:6379> FT.EXPLAINCLI rd "(foo bar)|(hello world) @date:[100 200]|@date:[500 +inf]" 1) INTERSECT { 2) UNION { 3) INTERSECT { 4) UNION { 5) foo 6) +foo(expanded) 7) } 8) UNION { 9) bar 10) +bar(expanded) 11) } 12) } 13) INTERSECT { 14) UNION { 15) hello 16) +hello(expanded) 17) } 18) UNION { 19) world 20) +world(expanded) 21) } 22) } 23) } 24) UNION { 25) NUMERIC {100.000000 <= @date <= 200.000000} 26) NUMERIC {500.000000 <= @date <= inf} 27) } 28) } 29) ``` See also -------- [`FT.CREATE`](../ft.create) | [`FT.SEARCH`](../ft.search) | [`FT.CONFIG SET`](../ft.config-set) Related topics -------------- [RediSearch](https://redis.io/docs/stack/search) redis FCALL FCALL ===== ``` FCALL ``` Syntax ``` FCALL function numkeys [key [key ...]] [arg [arg ...]] ``` Available since: 7.0.0 Time complexity: Depends on the function that is executed. ACL categories: `@slow`, `@scripting`, Invoke a function. Functions are loaded to the server with the [`FUNCTION LOAD`](../function-load) command. The first argument is the name of a loaded function. The second argument is the number of input key name arguments, followed by all the keys accessed by the function. In Lua, these names of input keys are available to the function as a table that is the callback's first argument. **Important:** To ensure the correct execution of functions, both in standalone and clustered deployments, all names of keys that a function accesses must be explicitly provided as input key arguments. The function **should only** access keys whose names are given as input arguments. Functions **should never** access keys with programmatically-generated names or based on the contents of data structures stored in the database. Any additional input argument **should not** represent names of keys. These are regular arguments and are passed in a Lua table as the callback's second argument. 
For more information please refer to the [Redis Programmability](https://redis.io/topics/programmability) and [Introduction to Redis Functions](https://redis.io/topics/functions-intro) pages. Examples -------- The following example will create a library named `mylib` with a single function, `myfunc`, that returns the first argument it gets. ``` redis> FUNCTION LOAD "#!lua name=mylib \n redis.register_function('myfunc', function(keys, args) return args[1] end)" "mylib" redis> FCALL myfunc 0 hello "hello" ``` redis ACL ACL === ``` ACL DRYRUN ``` Syntax ``` ACL DRYRUN username command [arg [arg ...]] ``` Available since: 7.0.0 Time complexity: O(1). ACL categories: `@admin`, `@slow`, `@dangerous`, Simulate the execution of a given command by a given user. This command can be used to test the permissions of a given user without having to enable the user or cause the side effects of running the command. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` on success. [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): An error describing why the user can't execute the command. Examples -------- ``` > ACL SETUSER VIRGINIA +SET ~* "OK" > ACL DRYRUN VIRGINIA SET foo bar "OK" > ACL DRYRUN VIRGINIA GET foo bar "This user has no permissions to run the 'GET' command" ``` redis TS.ALTER TS.ALTER ======== ``` TS.ALTER ``` Syntax ``` TS.ALTER key [RETENTION retentionPeriod] [CHUNK_SIZE size] [DUPLICATE_POLICY policy] [LABELS [{label value}...]] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [TimeSeries 1.0.0](https://redis.io/docs/stack/timeseries) Time complexity: O(N) where N is the number of labels requested to update Update the retention, chunk size, duplicate policy, and labels of an existing time series [Examples](#examples) Required arguments ------------------ `key` is key name for the time series. **Note:** This command alters only the specified element. 
For example, if you specify only `RETENTION` and `LABELS`, the chunk size and the duplicate policy are not altered. Optional arguments ------------------ `RETENTION retentionPeriod` is maximum retention period, compared to the maximum existing timestamp, in milliseconds. See `RETENTION` in [`TS.CREATE`](../ts.create). `CHUNK_SIZE size` is the initial allocation size, in bytes, for the data part of each new chunk. Actual chunks may consume more memory. See `CHUNK_SIZE` in [`TS.CREATE`](../ts.create). Changing this value does not affect existing chunks. `DUPLICATE_POLICY policy` is policy for handling multiple samples with identical timestamps. See `DUPLICATE_POLICY` in [`TS.CREATE`](../ts.create). `LABELS [{label value}...]` is set of label-value pairs that represent metadata labels of the key and serve as a secondary index. If `LABELS` is specified, the given label list is applied. Labels that are not present in the given list are removed implicitly. Specifying `LABELS` with no label-value pairs removes all existing labels. See `LABELS` in [`TS.CREATE`](../ts.create). Return value ------------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) - `OK` if executed correctly, or [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors) otherwise. Examples -------- **Alter a temperature time series** Create a temperature time series. ``` 127.0.0.1:6379> TS.CREATE temperature:2:32 RETENTION 60000 DUPLICATE_POLICY MAX LABELS sensor_id 2 area_id 32 OK ``` Alter the labels in the time series. ``` 127.0.0.1:6379> TS.ALTER temperature:2:32 LABELS sensor_id 2 area_id 32 sub_area_id 15 OK ``` See also -------- [`TS.CREATE`](../ts.create) Related topics -------------- [RedisTimeSeries](https://redis.io/docs/stack/timeseries) redis CONFIG CONFIG ====== ``` CONFIG SET ``` Syntax ``` CONFIG SET parameter value [parameter value ...]
``` Available since: 2.0.0 Time complexity: O(N) when N is the number of configuration parameters provided ACL categories: `@admin`, `@slow`, `@dangerous`, The `CONFIG SET` command is used in order to reconfigure the server at run time without the need to restart Redis. You can change trivial parameters or switch from one persistence option to another using this command. The list of configuration parameters supported by `CONFIG SET` can be obtained by issuing a `CONFIG GET *` command, which is the symmetrical command used to obtain information about the configuration of a running Redis instance. All the configuration parameters set using `CONFIG SET` are immediately loaded by Redis and will take effect starting with the next command executed. All the supported parameters have the same meaning as the equivalent configuration parameter used in the [redis.conf](http://github.com/redis/redis/raw/unstable/redis.conf) file. Note that you should look at the redis.conf file relevant to the version you're working with, as configuration options might change between versions. The link above is to the latest development version. It is possible to switch persistence from RDB snapshotting to append-only file (and the other way around) using the `CONFIG SET` command. For more information about how to do that please check the [persistence page](https://redis.io/topics/persistence). In general what you should know is that setting the `appendonly` parameter to `yes` will start a background process to save the initial append-only file (obtained from the in-memory data set), and will append all the subsequent commands on the append-only file, thus obtaining exactly the same effect of a Redis server that started with AOF turned on since the start. You can have AOF enabled together with RDB snapshotting if you want; the two options are not mutually exclusive.
Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` when the configuration was set properly. Otherwise an error is returned. History ------- * Starting with Redis version 7.0.0: Added the ability to set multiple parameters in one call. redis HLEN HLEN ==== ``` HLEN ``` Syntax ``` HLEN key ``` Available since: 2.0.0 Time complexity: O(1) ACL categories: `@read`, `@hash`, `@fast`, Returns the number of fields contained in the hash stored at `key`. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): number of fields in the hash, or `0` when `key` does not exist. Examples -------- ``` HSET myhash field1 "Hello" HSET myhash field2 "World" HLEN myhash ``` redis PUBSUB PUBSUB ====== ``` PUBSUB NUMSUB ``` Syntax ``` PUBSUB NUMSUB [channel [channel ...]] ``` Available since: 2.8.0 Time complexity: O(N) for the NUMSUB subcommand, where N is the number of requested channels ACL categories: `@pubsub`, `@slow`, Returns the number of subscribers (exclusive of clients subscribed to patterns) for the specified channels. Note that it is valid to call this command without channels. In this case it will just return an empty list. Cluster note: in a Redis Cluster clients can subscribe to every node, and can also publish to every other node. The cluster will make sure that published messages are forwarded as needed. That said, [`PUBSUB`](../pubsub)'s replies in a cluster only report information from the node's Pub/Sub context, rather than the entire cluster. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): a list of channels and number of subscribers for every channel. The format is channel, count, channel, count, ..., so the list is flat. The order in which the channels are listed is the same as the order of the channels specified in the command call. 
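Client code typically re-pairs this flat reply. A two-line Python sketch of that re-pairing, operating on an already-received reply list (no Redis connection is assumed, and `pairs` is a made-up helper name):

```python
def pairs(flat_reply):
    """Re-pair PUBSUB NUMSUB's flat channel, count, channel, count, ...
    reply, preserving the order the channels were requested in."""
    it = iter(flat_reply)
    return list(zip(it, it))  # consecutive items become (channel, count)

# e.g. a possible reply for: PUBSUB NUMSUB news sport
counts = pairs(["news", 3, "sport", 0])
```

Passing the same iterator to `zip` twice consumes two items per tuple, which is exactly the flat channel/count layout described above.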
redis RENAMENX RENAMENX ======== ``` RENAMENX ``` Syntax ``` RENAMENX key newkey ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@keyspace`, `@write`, `@fast`, Renames `key` to `newkey` if `newkey` does not yet exist. It returns an error when `key` does not exist. In Cluster mode, both `key` and `newkey` must be in the same **hash slot**, meaning that in practice only keys that have the same hash tag can be reliably renamed in cluster. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), specifically: * `1` if `key` was renamed to `newkey`. * `0` if `newkey` already exists. Examples -------- ``` SET mykey "Hello" SET myotherkey "World" RENAMENX mykey myotherkey GET myotherkey ``` History ------- * Starting with Redis version 3.2.0: The command no longer returns an error when source and destination names are the same. redis CLUSTER CLUSTER ======= ``` CLUSTER ADDSLOTS ``` Syntax ``` CLUSTER ADDSLOTS slot [slot ...] ``` Available since: 3.0.0 Time complexity: O(N) where N is the total number of hash slot arguments ACL categories: `@admin`, `@slow`, `@dangerous`, This command is useful in order to modify a node's view of the cluster configuration. Specifically it assigns a set of hash slots to the node receiving the command. If the command is successful, the node will map the specified hash slots to itself, and will start broadcasting the new configuration. However note that: 1. The command only works if all the specified slots are, from the point of view of the node receiving the command, currently not assigned. A node will refuse to take ownership for slots that already belong to some other node (including itself). 2. The command fails if the same slot is specified multiple times. 3. As a side effect of the command execution, if a slot among the ones specified as argument is set as `importing`, this state gets cleared once the node assigns the (previously unbound) slot to itself. 
Example ------- For example the following command assigns slots 1 2 3 to the node receiving the command: ``` > CLUSTER ADDSLOTS 1 2 3 OK ``` However trying to execute it again results in an error since the slots are already assigned: ``` > CLUSTER ADDSLOTS 1 2 3 ERR Slot 1 is already busy ``` Usage in Redis Cluster ---------------------- This command only works in cluster mode and is useful in the following Redis Cluster operations: 1. To create a new cluster, ADDSLOTS is used in order to initially set up master nodes, splitting the available hash slots among them. 2. In order to fix a broken cluster where certain slots are unassigned. Information about slots propagation and warnings ------------------------------------------------ Note that once a node assigns a set of slots to itself, it will start propagating this information in heartbeat packet headers. However the other nodes will accept the information only if they have the slot as not already bound with another node, or if the configuration epoch of the node advertising the new hash slot is greater than the node currently listed in the table. This means that this command should be used with care only by applications orchestrating Redis Cluster, like `redis-cli`; if used out of the right context, the command can leave the cluster in a wrong state or cause data loss. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` if the command was successful. Otherwise an error is returned.
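For context on how keys end up in these slots: per the Redis Cluster specification (not restated in the section above), a key maps to slot `CRC16(key) mod 16384`, where CRC16 is the XModem variant. A minimal Python sketch of that mapping (it ignores `{hash tag}` handling, which the full specification also defines; `key_slot` is a hypothetical helper name):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XModem (polynomial 0x1021, initial value 0), the CRC16
    variant the Redis Cluster specification uses for slot mapping."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Real clients also honor {hash tags}; omitted in this sketch.
    return crc16_xmodem(key.encode()) % 16384
```

Keys sharing the same `{hash tag}` land in the same slot, which is what makes multi-key operations possible within a cluster.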
redis LOLWUT LOLWUT ====== ``` LOLWUT ``` Syntax ``` LOLWUT [VERSION version] ``` Available since: 5.0.0 Time complexity: ACL categories: `@read`, `@fast`, The LOLWUT command displays the Redis version; however, as a side effect of doing so, it also creates a piece of generative computer art that is different with each version of Redis. The command was introduced in Redis 5 and announced with this [blog post](http://antirez.com/news/123). By default the `LOLWUT` command will display the piece corresponding to the current Redis version, however it is possible to display a specific version using the following form: ``` LOLWUT VERSION 5 ... other optional arguments ... ``` Of course the "5" above is an example. Each LOLWUT version takes a different set of arguments in order to change the output. The user is encouraged to play with it to discover how the output changes when adding more numerical arguments. LOLWUT wants to be a reminder that there is more in programming than just putting some code together in order to create something useful. Every LOLWUT version should have the following properties: 1. It should display some computer art. There are no limits as long as the output works well in a normal terminal display. However the output need not be limited to graphics (as LOLWUT 5 and 6 are), but can also be generative poetry and other non-graphical things. 2. LOLWUT output should be completely useless. Displaying some useful Redis internal metrics does not count as a valid LOLWUT. 3. LOLWUT output should be fast to generate so that the command can be called in production instances without issues. It should remain fast even when the user experiments with odd parameters. 4. LOLWUT implementations should be safe and carefully checked for security, and resist untrusted inputs if they take arguments. 5. LOLWUT must always display the Redis version at the end. 
Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings) (or verbatim reply when using the RESP3 protocol): the string containing the generative computer art, and a text with the Redis version. redis SORT SORT ==== ``` SORT ``` Syntax ``` SORT key [BY pattern] [LIMIT offset count] [GET pattern [GET pattern ...]] [ASC | DESC] [ALPHA] [STORE destination] ``` Available since: 1.0.0 Time complexity: O(N+M*log(M)) where N is the number of elements in the list or set to sort, and M the number of returned elements. When the elements are not sorted, complexity is O(N). ACL categories: `@write`, `@set`, `@sortedset`, `@list`, `@slow`, `@dangerous`, Returns or stores the elements contained in the [list](https://redis.io/topics/data-types#lists), [set](https://redis.io/topics/data-types#set) or [sorted set](https://redis.io/topics/data-types#sorted-sets) at `key`. There is also the [`SORT_RO`](../sort_ro) read-only variant of this command. By default, sorting is numeric and elements are compared by their value interpreted as a double precision floating point number. This is `SORT` in its simplest form: ``` SORT mylist ``` Assuming `mylist` is a list of numbers, this command will return the same list with the elements sorted from small to large. In order to sort the numbers from large to small, use the `DESC` modifier: ``` SORT mylist DESC ``` When `mylist` contains string values and you want to sort them lexicographically, use the `ALPHA` modifier: ``` SORT mylist ALPHA ``` Redis is UTF-8 aware, assuming you correctly set the `LC_COLLATE` environment variable. The number of returned elements can be limited using the `LIMIT` modifier. This modifier takes the `offset` argument, specifying the number of elements to skip, and the `count` argument, specifying the number of elements to return, starting at `offset`. 
The following example will return 10 elements of the sorted version of `mylist`, starting at element 0 (`offset` is zero-based): ``` SORT mylist LIMIT 0 10 ``` Almost all modifiers can be used together. The following example will return the first 5 elements, lexicographically sorted in descending order: ``` SORT mylist LIMIT 0 5 ALPHA DESC ``` Sorting by external keys ------------------------ Sometimes you want to sort elements using external keys as weights to compare instead of comparing the actual elements in the list, set or sorted set. Let's say the list `mylist` contains the elements `1`, `2` and `3` representing unique IDs of objects stored in `object_1`, `object_2` and `object_3`. When these objects have associated weights stored in `weight_1`, `weight_2` and `weight_3`, `SORT` can be instructed to use these weights to sort `mylist` with the following statement: ``` SORT mylist BY weight_* ``` The `BY` option takes a pattern (equal to `weight_*` in this example) that is used to generate the keys that are used for sorting. These key names are obtained by substituting the first occurrence of `*` with the actual value of the element in the list (`1`, `2` and `3` in this example). Skip sorting the elements ------------------------- The `BY` option can also take a non-existent key, which causes `SORT` to skip the sorting operation. This is useful if you want to retrieve external keys (see the `GET` option below) without the overhead of sorting. ``` SORT mylist BY nosort ``` Retrieving external keys ------------------------ Our previous example returns just the sorted IDs. In some cases, it is more useful to get the actual objects instead of their IDs (`object_1`, `object_2` and `object_3`). 
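The key-name generation rule described above (substituting the first `*` in the pattern with each element's value) can be sketched in Python. This is an illustration of the substitution rule only; `substitute_pattern` and `sort_by` are invented helper names, not a Redis client API, and the real server reads the weights from actual keys rather than a dict:

```python
def substitute_pattern(pattern: str, element: str) -> str:
    """Build the external key name SORT consults for one element:
    the first occurrence of '*' in the pattern is replaced by the
    element's value, e.g. 'weight_*' with element '2' -> 'weight_2'."""
    return pattern.replace("*", element, 1)

def sort_by(elements, external, pattern):
    """Mimic SORT mylist BY pattern: order the elements by the values
    stored under their generated external key names."""
    return sorted(elements, key=lambda e: external[substitute_pattern(pattern, e)])

# Elements are object IDs; their weights live in external keys.
weights = {"weight_1": 30, "weight_2": 10, "weight_3": 20}
print(sort_by(["1", "2", "3"], weights, "weight_*"))
```

With the weights above, the IDs come back ordered by their external weights: `2` (10), then `3` (20), then `1` (30).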
Retrieving external keys based on the elements in a list, set or sorted set can be done with the following command: ``` SORT mylist BY weight_* GET object_* ``` The `GET` option can be used multiple times in order to get more keys for every element of the original list, set or sorted set. It is also possible to `GET` the element itself using the special pattern `#`: ``` SORT mylist BY weight_* GET object_* GET # ``` Restrictions for using external keys ------------------------------------ When Redis cluster mode is enabled, there is no way to guarantee the existence of the external keys on the node which the command is processed on. In this case, any use of [`GET`](../get) or `BY` which references an external key pattern will cause the command to fail with an error. Starting from Redis 7.0, any use of [`GET`](../get) or `BY` which references an external key pattern will only be allowed if the current user running the command has full key read permissions. Full key read permissions can be set for the user by, for example, specifying `'%R~*'` or `'~*'` with the relevant command access rules. You can check the [`ACL SETUSER`](../acl-setuser) command manual for more information on setting ACL access rules. If full key read permissions aren't set, the command will fail with an error. Storing the result of a SORT operation -------------------------------------- By default, `SORT` returns the sorted elements to the client. With the `STORE` option, the result will be stored as a list at the specified key instead of being returned to the client. ``` SORT mylist BY weight_* STORE resultkey ``` An interesting pattern using `SORT ... STORE` consists in associating an [`EXPIRE`](../expire) timeout with the resulting key, so that in applications where the result of a `SORT` operation can be cached for some time, other clients will use the cached list instead of calling `SORT` for every request. 
When the key times out, an updated version of the cache can be created by calling `SORT ... STORE` again. Note that for correctly implementing this pattern it is important to avoid multiple clients rebuilding the cache at the same time. Some kind of locking is needed here (for instance using [`SETNX`](../setnx)). Using hashes in `BY` and `GET` ------------------------------ It is possible to use `BY` and `GET` options against hash fields with the following syntax: ``` SORT mylist BY weight_*->fieldname GET object_*->fieldname ``` The string `->` is used to separate the key name from the hash field name. The key is substituted as documented above, and the hash stored at the resulting key is accessed to retrieve the specified hash field. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): without passing the `store` option the command returns a list of sorted elements. [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): when the `store` option is specified the command returns the number of sorted elements in the destination list. redis GEOSEARCH GEOSEARCH ========= ``` GEOSEARCH ``` Syntax ``` GEOSEARCH key <FROMMEMBER member | FROMLONLAT longitude latitude> <BYRADIUS radius <M | KM | FT | MI> | BYBOX width height <M | KM | FT | MI>> [ASC | DESC] [COUNT count [ANY]] [WITHCOORD] [WITHDIST] [WITHHASH] ``` Available since: 6.2.0 Time complexity: O(N+log(M)) where N is the number of elements in the grid-aligned bounding box area around the shape provided as the filter and M is the number of items inside the shape ACL categories: `@read`, `@geo`, `@slow`, Return the members of a sorted set populated with geospatial information using [`GEOADD`](../geoadd), which are within the borders of the area specified by a given shape. This command extends the [`GEORADIUS`](../georadius) command, so in addition to searching within circular areas, it supports searching within rectangular areas. 
This command should be used in place of the deprecated [`GEORADIUS`](../georadius) and [`GEORADIUSBYMEMBER`](../georadiusbymember) commands. The query's center point is provided by one of these mandatory options: * `FROMMEMBER`: Use the position of the given existing `<member>` in the sorted set. * `FROMLONLAT`: Use the given `<longitude>` and `<latitude>` position. The query's shape is provided by one of these mandatory options: * `BYRADIUS`: Similar to [`GEORADIUS`](../georadius), search inside a circular area according to the given `<radius>`. * `BYBOX`: Search inside an axis-aligned rectangle, determined by `<height>` and `<width>`. The command optionally returns additional information using the following options: * `WITHDIST`: Also return the distance of the returned items from the specified center point. The distance is returned in the same unit as specified for the radius or height and width arguments. * `WITHCOORD`: Also return the longitude and latitude of the matching items. * `WITHHASH`: Also return the raw geohash-encoded sorted set score of the item, in the form of a 52 bit unsigned integer. This is only useful for low level hacks or debugging and is otherwise of little interest for the general user. Matching items are returned unsorted by default. To sort them, use one of the following two options: * `ASC`: Sort returned items from the nearest to the farthest, relative to the center point. * `DESC`: Sort returned items from the farthest to the nearest, relative to the center point. All matching items are returned by default. To limit the results to the first N matching items, use the **COUNT `<count>`** option. When the `ANY` option is used, the command returns as soon as enough matches are found. This means that the results returned may not be the ones closest to the specified point, but the effort invested by the server to generate them is significantly less. 
When `ANY` is not provided, the command will perform an effort that is proportional to the number of items matching the specified area and sort them, so querying very large areas with a very small `COUNT` option may be slow even if just a few results are returned. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays), specifically: * Without any `WITH` option specified, the command just returns a linear array like ["New York","Milan","Paris"]. * If `WITHCOORD`, `WITHDIST` or `WITHHASH` options are specified, the command returns an array of arrays, where each sub-array represents a single item. When additional information is returned as an array of arrays for each item, the first item in the sub-array is always the name of the returned item. The other information is returned in the following order as successive elements of the sub-array. 1. The distance from the center as a floating point number, in the same unit specified in the shape. 2. The geohash integer. 3. The coordinates as a two-item x,y array (longitude,latitude). Examples -------- ``` GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania" GEOADD Sicily 12.758489 38.788135 "edge1" 17.241510 38.788135 "edge2" GEOSEARCH Sicily FROMLONLAT 15 37 BYRADIUS 200 km ASC GEOSEARCH Sicily FROMLONLAT 15 37 BYBOX 400 400 km ASC WITHCOORD WITHDIST ``` History ------- * Starting with Redis version 7.0.0: Added support for uppercase unit names. redis XRANGE XRANGE ====== ``` XRANGE ``` Syntax ``` XRANGE key start end [COUNT count] ``` Available since: 5.0.0 Time complexity: O(N) with N being the number of elements being returned. If N is constant (e.g. always asking for the first 10 elements with COUNT), you can consider it O(1). ACL categories: `@read`, `@stream`, `@slow`, The command returns the stream entries matching a given range of IDs. The range is specified by a minimum and maximum ID. 
All the entries having an ID between the two specified or exactly one of the two IDs specified (closed interval) are returned. The `XRANGE` command has a number of applications: * Returning items in a specific time range. This is possible because Stream IDs are [related to time](https://redis.io/topics/streams-intro). * Iterating a stream incrementally, returning just a few items at every iteration. However it is semantically much more robust than the [`SCAN`](../scan) family of functions. * Fetching a single entry from a stream, providing the ID of the entry to fetch two times: as start and end of the query interval. The command also has a reciprocal command returning items in the reverse order, called [`XREVRANGE`](../xrevrange), which is otherwise identical. `-` and `+` special IDs ------------------------ The `-` and `+` special IDs mean respectively the minimum ID possible and the maximum ID possible inside a stream, so the following command will just return every entry in the stream: ``` > XRANGE somestream - + 1) 1) 1526985054069-0 2) 1) "duration" 2) "72" 3) "event-id" 4) "9" 5) "user-id" 6) "839248" 2) 1) 1526985069902-0 2) 1) "duration" 2) "415" 3) "event-id" 4) "2" 5) "user-id" 6) "772213" ... other entries here ... ``` The `-` and `+` special IDs mean, respectively, the minimal and maximal range IDs, however they are nicer to type. Incomplete IDs -------------- Stream IDs are composed of two parts, a Unix millisecond time stamp and a sequence number for entries inserted in the same millisecond. It is possible to use `XRANGE` specifying just the first part of the ID, the millisecond time, like in the following example: ``` > XRANGE somestream 1526985054069 1526985055069 ``` In this case, `XRANGE` will auto-complete the start interval with `-0` and end interval with `-18446744073709551615`, in order to return all the entries that were generated between a given millisecond and the end of the other specified millisecond. 
This also means that repeating the same millisecond two times, we get all the entries within such millisecond, because the sequence number range will be from zero to the maximum. Used in this way `XRANGE` works as a range query command to obtain entries in a specified time range. This is very handy in order to access the history of past events in a stream. Exclusive ranges ---------------- The range is closed (inclusive) by default, meaning that the reply can include entries with IDs matching the query's start and end intervals. It is possible to specify an open interval (exclusive) by prefixing the ID with the character `(`. This is useful for iterating the stream, as explained below. Returning a maximum number of entries ------------------------------------- Using the **COUNT** option it is possible to reduce the number of entries reported. This is a very important feature even if it may look marginal, because it allows you, for instance, to model operations such as *give me the entry greater or equal to the following*: ``` > XRANGE somestream 1526985054069-0 + COUNT 1 1) 1) 1526985054069-0 2) 1) "duration" 2) "72" 3) "event-id" 4) "9" 5) "user-id" 6) "839248" ``` In the above case the entry `1526985054069-0` exists, otherwise the server would have sent us the next one. Using `COUNT` is also the base in order to use `XRANGE` as an iterator. Iterating a stream ------------------ In order to iterate a stream, we can proceed as follows. Let's assume that we want two elements per iteration. We start fetching the first two elements, which is trivial: ``` > XRANGE writers - + COUNT 2 1) 1) 1526985676425-0 2) 1) "name" 2) "Virginia" 3) "surname" 4) "Woolf" 2) 1) 1526985685298-0 2) 1) "name" 2) "Jane" 3) "surname" 4) "Austen" ``` Then instead of starting the iteration again from `-`, as the start of the range we use the entry ID of the *last* entry returned by the previous `XRANGE` call as an exclusive interval. 
The ID of the last entry is `1526985685298-0`, so we just prefix it with a '(', and continue our iteration: ``` > XRANGE writers (1526985685298-0 + COUNT 2 1) 1) 1526985691746-0 2) 1) "name" 2) "Toni" 3) "surname" 4) "Morrison" 2) 1) 1526985712947-0 2) 1) "name" 2) "Agatha" 3) "surname" 4) "Christie" ``` And so forth. Eventually this will allow us to visit all the entries in the stream. Obviously, we can start the iteration from any ID, or even from a specific time, by providing a given incomplete start ID. Moreover, we can limit the iteration to a given ID or time, by providing an end ID or incomplete ID instead of `+`. The command [`XREAD`](../xread) is also able to iterate the stream. The command [`XREVRANGE`](../xrevrange) can iterate the stream in reverse, from higher IDs (or times) to lower IDs (or times). ### Iterating with earlier versions of Redis While exclusive range intervals are only available from Redis 6.2, it is still possible to use a similar stream iteration pattern with earlier versions. You start fetching from the stream the same way as described above to obtain the first entries. For the subsequent calls, you'll need to programmatically advance the last entry's ID returned. Most Redis clients should abstract this detail, but the implementation can also be in the application if needed. In the example above, this means incrementing the sequence of `1526985685298-0` by one, from 0 to 1. The second call would, therefore, be: ``` > XRANGE writers 1526985685298-1 + COUNT 2 1) 1) 1526985691746-0 2) 1) "name" 2) "Toni" ... ``` Also, note that once the sequence part of the last ID equals 18446744073709551615, you'll need to increment the timestamp and reset the sequence part to 0. For example, incrementing the ID `1526985685298-18446744073709551615` should result in `1526985685299-0`. A symmetrical pattern applies to iterating the stream with [`XREVRANGE`](../xrevrange). The only difference is that the client needs to decrement the ID for the subsequent calls. 
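The client-side ID stepping just described, including the rollover at the maximum sequence value, can be sketched as two pure Python helpers. These are illustrations of the rule, with invented names; they are not part of any official Redis client:

```python
MAX_SEQ = 18446744073709551615  # 2**64 - 1, the maximum sequence part

def next_id(stream_id: str) -> str:
    """Increment a stream ID for forward iteration on pre-6.2 servers.
    When the sequence part is at its maximum, bump the timestamp and
    reset the sequence to 0."""
    ms, seq = (int(part) for part in stream_id.split("-"))
    if seq == MAX_SEQ:
        return f"{ms + 1}-0"
    return f"{ms}-{seq + 1}"

def prev_id(stream_id: str) -> str:
    """Decrement a stream ID for reverse iteration with XREVRANGE.
    When the sequence part is 0, decrement the timestamp and set the
    sequence to its maximum value."""
    ms, seq = (int(part) for part in stream_id.split("-"))
    if seq == 0:
        return f"{ms - 1}-{MAX_SEQ}"
    return f"{ms}-{seq - 1}"

print(next_id("1526985685298-0"))  # 1526985685298-1, as in the example above
```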
When decrementing an ID with a sequence part of 0, the timestamp needs to be decremented by 1 and the sequence set to 18446744073709551615. Fetching single items --------------------- If you look for an `XGET` command you'll be disappointed because `XRANGE` is effectively the way to go in order to fetch a single entry from a stream. All you have to do is to specify the ID two times in the arguments of XRANGE: ``` > XRANGE mystream 1526984818136-0 1526984818136-0 1) 1) 1526984818136-0 2) 1) "duration" 2) "1532" 3) "event-id" 4) "5" 5) "user-id" 6) "7782813" ``` Additional information about streams ------------------------------------ For further information about Redis streams please check our [introduction to Redis Streams document](https://redis.io/topics/streams-intro). Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays), specifically: The command returns the entries with IDs matching the specified range. The returned entries are complete, which means that the ID and all the fields they are composed of are returned. Moreover, the entries are returned with their fields and values in the exact same order as [`XADD`](../xadd) added them. Examples -------- ``` XADD writers * name Virginia surname Woolf XADD writers * name Jane surname Austen XADD writers * name Toni surname Morrison XADD writers * name Agatha surname Christie XADD writers * name Ngozi surname Adichie XLEN writers XRANGE writers - + COUNT 2 ``` History ------- * Starting with Redis version 6.2.0: Added exclusive ranges.
redis CF.ADDNX CF.ADDNX ======== ``` CF.ADDNX ``` Syntax ``` CF.ADDNX key item ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 1.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(k + i), where k is the number of sub-filters and i is maxIterations Adds an item to a cuckoo filter if the item did not exist previously. See documentation on [`CF.ADD`](../cf.add) for more information on this command. This command is equivalent to a [`CF.EXISTS`](../cf.exists) + [`CF.ADD`](../cf.add) command. It does not insert an element into the filter if its fingerprint already exists in order to use the available capacity more efficiently. However, deleting elements can introduce a **false negative** error rate! Note that this command is slower than [`CF.ADD`](../cf.add) because it first checks whether the item exists. ### Parameters * **key**: The name of the filter * **item**: The item to add Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) - where "1" means the item has been added to the filter, and "0" means the item already existed. Examples -------- ``` redis> CF.ADDNX cf item1 (integer) 0 redis> CF.ADDNX cf item_new (integer) 1 ``` redis GETRANGE GETRANGE ======== ``` GETRANGE ``` Syntax ``` GETRANGE key start end ``` Available since: 2.4.0 Time complexity: O(N) where N is the length of the returned string. The complexity is ultimately determined by the returned length, but because creating a substring from an existing string is very cheap, it can be considered O(1) for small strings. ACL categories: `@read`, `@string`, `@slow`, Returns the substring of the string value stored at `key`, determined by the offsets `start` and `end` (both are inclusive). Negative offsets can be used in order to provide an offset starting from the end of the string. So -1 means the last character, -2 the penultimate and so forth. 
The function handles out of range requests by limiting the resulting range to the actual length of the string. Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings) Examples -------- ``` SET mykey "This is a string" GETRANGE mykey 0 3 GETRANGE mykey -3 -1 GETRANGE mykey 0 -1 GETRANGE mykey 10 100 ``` redis JSON.ARRPOP JSON.ARRPOP =========== ``` JSON.ARRPOP ``` Syntax ``` JSON.ARRPOP key [path [index]] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [JSON 1.0.0](https://redis.io/docs/stack/json) Time complexity: O(N) when path is evaluated to a single value where N is the size of the array and the specified index is not the last element, O(1) when path is evaluated to a single value and the specified index is the last element, or O(N) when path is evaluated to multiple values, where N is the size of the key Remove and return an element from the index in the array [Examples](#examples) Required arguments ------------------ `key` is key to modify. `index` is position in the array to start popping from. Default is `-1`, meaning the last element. Out-of-range indexes round to their respective array ends. Popping an empty array returns null. Optional arguments ------------------ `path` is JSONPath to specify. Default is root `$`. Return ------ `JSON.ARRPOP` returns an [array](https://redis.io/docs/reference/protocol-spec/#resp-arrays) of bulk string replies for each path, each reply is the popped JSON value, or `nil`, if the matching JSON value is not an array. For more information about replies, see [Redis serialization protocol specification](https://redis.io/docs/reference/protocol-spec). Examples -------- **Pop a value from an index and insert a new value** Create two headphone products with maximum sound levels. 
``` 127.0.0.1:6379> JSON.SET key $ '[{"name":"Healthy headphones","description":"Wireless Bluetooth headphones with noise-cancelling technology","connection":{"wireless":true,"type":"Bluetooth"},"price":99.98,"stock":25,"colors":["black","silver"],"max_level":[60,70,80]},{"name":"Noisy headphones","description":"Wireless Bluetooth headphones with noise-cancelling technology","connection":{"wireless":true,"type":"Bluetooth"},"price":99.98,"stock":25,"colors":["black","silver"],"max_level":[80,90,100,120]}]' OK ``` Get all maximum values for the second product. ``` 127.0.0.1:6379> JSON.GET key $.[1].max_level "[[80,90,100,120]]" ``` Update the `max_level` field of the product: remove an unavailable value and add a newly available value. ``` 127.0.0.1:6379> JSON.ARRPOP key $.[1].max_level 0 1) "80" ``` Get the updated array. ``` 127.0.0.1:6379> JSON.GET key $.[1].max_level "[[90,100,120]]" ``` Now insert a new lowest value. ``` 127.0.0.1:6379> JSON.ARRINSERT key $.[1].max_level 0 85 1) (integer) 4 ``` Get the updated array. ``` 127.0.0.1:6379> JSON.GET key $.[1].max_level "[[85,90,100,120]]" ``` See also -------- [`JSON.ARRAPPEND`](../json.arrappend) | [`JSON.ARRINDEX`](../json.arrindex) Related topics -------------- * [RedisJSON](https://redis.io/docs/stack/json) * [Index and search JSON documents](https://redis.io/docs/stack/search/indexing_json) redis GEORADIUS GEORADIUS ========= ``` GEORADIUS (deprecated) ``` As of Redis version 6.2.0, this command is regarded as deprecated. It can be replaced by [`GEOSEARCH`](../geosearch) and [`GEOSEARCHSTORE`](../geosearchstore) with the `BYRADIUS` argument when migrating or writing new code. 
Syntax ``` GEORADIUS key longitude latitude radius <M | KM | FT | MI> [WITHCOORD] [WITHDIST] [WITHHASH] [COUNT count [ANY]] [ASC | DESC] [STORE key] [STOREDIST key] ``` Available since: 3.2.0 Time complexity: O(N+log(M)) where N is the number of elements inside the bounding box of the circular area delimited by center and radius and M is the number of items inside the index. ACL categories: `@write`, `@geo`, `@slow`, Return the members of a sorted set populated with geospatial information using [`GEOADD`](../geoadd), which are within the borders of the area specified with the center location and the maximum distance from the center (the radius). This manual page also covers the [`GEORADIUS_RO`](../georadius_ro) and [`GEORADIUSBYMEMBER_RO`](../georadiusbymember_ro) variants (see the section below for more information). The common use case for this command is to retrieve geospatial items near a specified point not farther than a given amount of meters (or other units). This allows, for example, suggesting nearby places to mobile users of an application. The radius is specified in one of the following units: * **m** for meters. * **km** for kilometers. * **mi** for miles. * **ft** for feet. The command optionally returns additional information using the following options: * `WITHDIST`: Also return the distance of the returned items from the specified center. The distance is returned in the same unit as the unit specified as the radius argument of the command. * `WITHCOORD`: Also return the longitude,latitude coordinates of the matching items. * `WITHHASH`: Also return the raw geohash-encoded sorted set score of the item, in the form of a 52 bit unsigned integer. This is only useful for low level hacks or debugging and is otherwise of little interest for the general user. The command default is to return unsorted items. 
Two different sorting methods can be invoked using the following two options: * `ASC`: Sort returned items from the nearest to the farthest, relative to the center. * `DESC`: Sort returned items from the farthest to the nearest, relative to the center. By default all the matching items are returned. It is possible to limit the results to the first N matching items by using the **COUNT `<count>`** option. When `ANY` is provided the command will return as soon as enough matches are found, so the results may not be the ones closest to the specified point, but on the other hand, the effort invested by the server is significantly lower. When `ANY` is not provided, the command will perform an effort that is proportional to the number of items matching the specified area and sort them, so querying very large areas with a very small `COUNT` option may be slow even if just a few results are returned. By default the command returns the items to the client. It is possible to store the results with one of these options: * `STORE`: Store the items in a sorted set populated with their geospatial information. * `STOREDIST`: Store the items in a sorted set populated with their distance from the center as a floating point number, in the same unit specified in the radius. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays), specifically: * Without any `WITH` option specified, the command just returns a linear array like ["New York","Milan","Paris"]. * If `WITHCOORD`, `WITHDIST` or `WITHHASH` options are specified, the command returns an array of arrays, where each sub-array represents a single item. When additional information is returned as an array of arrays for each item, the first item in the sub-array is always the name of the returned item. The other information is returned in the following order as successive elements of the sub-array. 1. The distance from the center as a floating point number, in the same unit specified in the radius. 2. 
The geohash integer. 3. The coordinates as a two items x,y array (longitude,latitude). So for example the command `GEORADIUS Sicily 15 37 200 km WITHCOORD WITHDIST` will return each item in the following way: ``` ["Palermo","190.4424",["13.361389338970184","38.115556395496299"]] ``` Read-only variants ------------------ Since `GEORADIUS` and [`GEORADIUSBYMEMBER`](../georadiusbymember) have a `STORE` and `STOREDIST` option they are technically flagged as writing commands in the Redis command table. For this reason read-only replicas will flag them, and Redis Cluster replicas will redirect them to the master instance even if the connection is in read-only mode (see the [`READONLY`](../readonly) command of Redis Cluster). Breaking the compatibility with the past was considered but rejected, at least for Redis 4.0, so instead two read-only variants of the commands were added. They are exactly like the original commands but refuse the `STORE` and `STOREDIST` options. The two variants are called [`GEORADIUS_RO`](../georadius_ro) and [`GEORADIUSBYMEMBER_RO`](../georadiusbymember_ro), and can safely be used in replicas. Examples -------- ``` GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania" GEORADIUS Sicily 15 37 200 km WITHDIST GEORADIUS Sicily 15 37 200 km WITHCOORD GEORADIUS Sicily 15 37 200 km WITHDIST WITHCOORD ``` History ------- * Starting with Redis version 6.2.0: Added the `ANY` option for `COUNT`. * Starting with Redis version 7.0.0: Added support for uppercase unit names. redis FT.CURSOR FT.CURSOR ========= ``` FT.CURSOR READ ``` Syntax ``` FT.CURSOR READ index cursor_id [COUNT read_size] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.1.0](https://redis.io/docs/stack/search) Time complexity: O(1) Read next results from an existing cursor [Examples](#examples) See [Cursor API](https://redis.io/docs/stack/search/reference/aggregations/#cursor-api) for more details. 
Required arguments ------------------ `index` is index name. `cursor_id` is id of the cursor. `[COUNT read_size]` is number of results to read. This parameter overrides `COUNT` specified in [`FT.AGGREGATE`](../ft.aggregate). Return ------ FT.CURSOR READ returns an array reply where each row is an array reply and represents a single aggregate result. Examples -------- **Read next results from a cursor** ``` 127.0.0.1:6379> FT.CURSOR READ idx 342459320 COUNT 50 ``` See also -------- [`FT.CURSOR DEL`](../ft.cursor-del) | [`FT.AGGREGATE`](../ft.aggregate) Related topics -------------- [RediSearch](https://redis.io/docs/stack/search) redis LASTSAVE LASTSAVE ======== ``` LASTSAVE ``` Syntax ``` LASTSAVE ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@admin`, `@fast`, `@dangerous`, Return the UNIX TIME of the last DB save executed with success. A client may check if a [`BGSAVE`](../bgsave) command succeeded by reading the `LASTSAVE` value, then issuing a [`BGSAVE`](../bgsave) command and checking at regular intervals every N seconds if `LASTSAVE` changed. Redis considers the database saved successfully at startup. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): a UNIX time stamp. redis CONFIG CONFIG ====== ``` CONFIG RESETSTAT ``` Syntax ``` CONFIG RESETSTAT ``` Available since: 2.0.0 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, Resets the statistics reported by Redis using the [`INFO`](../info) and [`LATENCY HISTOGRAM`](../latency-histogram) commands. The following is a non-exhaustive list of values that are reset: * Keyspace hits and misses * Number of expired keys * Command and error statistics * Connections received, rejected and evicted * Persistence statistics * Active defragmentation statistics Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): always `OK`. 
redis FUNCTION FUNCTION ======== ``` FUNCTION LIST ``` Syntax ``` FUNCTION LIST [LIBRARYNAME library-name-pattern] [WITHCODE] ``` Available since: 7.0.0 Time complexity: O(N) where N is the number of functions ACL categories: `@slow`, `@scripting`, Return information about the functions and libraries. You can use the optional `LIBRARYNAME` argument to specify a pattern for matching library names. The optional `WITHCODE` modifier will cause the server to include the library's source implementation in the reply. The following information is provided for each of the libraries in the response: * **library\_name:** the name of the library. * **engine:** the engine of the library. * **functions:** the list of functions in the library. Each function has the following fields: + **name:** the name of the function. + **description:** the function's description. + **flags:** an array of [function flags](https://redis.io/docs/manual/programmability/functions-intro/#function-flags). * **library\_code:** the library's source code (when given the `WITHCODE` modifier). For more information please refer to [Introduction to Redis Functions](https://redis.io/topics/functions-intro). Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) redis TS.MREVRANGE TS.MREVRANGE ============ ``` TS.MREVRANGE ``` Syntax ``` TS.MREVRANGE fromTimestamp toTimestamp [LATEST] [FILTER_BY_TS TS...] [FILTER_BY_VALUE min max] [WITHLABELS | SELECTED_LABELS label...] [COUNT count] [[ALIGN align] AGGREGATION aggregator bucketDuration [BUCKETTIMESTAMP bt] [EMPTY]] FILTER filterExpr... 
[GROUPBY label REDUCE reducer] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [TimeSeries 1.4.0](https://redis.io/docs/stack/timeseries) Time complexity: O(n/m+k) where n = Number of data points, m = Chunk size (data points per chunk), k = Number of data points that are in the requested ranges Query a range across multiple time series by filters in reverse direction [Examples](#examples) Required arguments ------------------ `fromTimestamp` is the start timestamp for the range query (integer UNIX timestamp in milliseconds) or `-` to denote the timestamp of the earliest sample among all time series that pass `FILTER filterExpr...`. `toTimestamp` is the end timestamp for the range query (integer UNIX timestamp in milliseconds) or `+` to denote the timestamp of the latest sample among all time series that pass `FILTER filterExpr...`. `FILTER filterExpr...` filters time series based on their labels and label values. Each filter expression has one of the following syntaxes: * `label=value`, where `label` equals `value` * `label!=value`, where `label` does not equal `value` * `label=`, where `key` does not have label `label` * `label!=`, where `key` has label `label` * `label=(value1,value2,...)`, where `key` with label `label` equals one of the values in the list * `label!=(value1,value2,...)`, where key with label `label` does not equal any of the values in the list **Notes:** * At least one `label=value` filter is required. * Filters are conjunctive. For example, the FILTER `type=temperature room=study` means that a time series is a temperature time series of a study room. * Don't use whitespaces in the filter expression. Optional arguments ------------------ `LATEST` (since RedisTimeSeries v1.8) is used when a time series is a compaction. With `LATEST`, TS.MREVRANGE also reports the compacted value of the latest possibly partial bucket, given that this bucket's start time falls within `[fromTimestamp, toTimestamp]`. 
Without `LATEST`, TS.MREVRANGE does not report the latest possibly partial bucket. When a time series is not a compaction, `LATEST` is ignored. The data in the latest bucket of a compaction is possibly partial. A bucket is *closed* and compacted only upon arrival of a new sample that *opens* a new *latest* bucket. There are cases, however, when the compacted value of the latest possibly partial bucket is also required. In such a case, use `LATEST`. `FILTER_BY_TS ts...` (since RedisTimeSeries v1.6) filters samples by a list of specific timestamps. A sample passes the filter if its exact timestamp is specified and falls within `[fromTimestamp, toTimestamp]`. `FILTER_BY_VALUE min max` (since RedisTimeSeries v1.6) filters samples by minimum and maximum values. `WITHLABELS` includes in the reply all label-value pairs representing metadata labels of the time series. If `WITHLABELS` or `SELECTED_LABELS` are not specified, by default, an empty list is reported as label-value pairs. `SELECTED_LABELS label...` (since RedisTimeSeries v1.6) returns a subset of the label-value pairs that represent metadata labels of the time series. Use when a large number of labels exists per series, but only the values of some of the labels are required. If `WITHLABELS` or `SELECTED_LABELS` are not specified, by default, an empty list is reported as label-value pairs. `COUNT count` limits the number of returned samples. `ALIGN align` (since RedisTimeSeries v1.6) is a time bucket alignment control for `AGGREGATION`. It controls the time bucket timestamps by changing the reference timestamp on which a bucket is defined. Values include: * `start` or `-`: The reference timestamp will be the query start interval time (`fromTimestamp`) which can't be `-` * `end` or `+`: The reference timestamp will be the query end interval time (`toTimestamp`) which can't be `+` * A specific timestamp: align the reference timestamp to a specific time **Note:** When not provided, alignment is set to `0`. 
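The alignment control above boils down to a small calculation: a sample's bucket start time is the largest value `<= timestamp` that is congruent to `align` modulo `bucketDuration`. A sketch of that arithmetic (plain Python, not part of RedisTimeSeries):

```python
def bucket_start(timestamp, bucket_duration, align=0):
    """Start time of the bucket containing `timestamp`, aligned so that
    bucket start times are multiples of bucket_duration plus the
    remainder align % bucket_duration."""
    return timestamp - ((timestamp - align) % bucket_duration)

# Without ALIGN (align=0), bucket starts are multiples of bucketDuration:
#   bucket_start(1037, 1000)      -> 1000
# With ALIGN 500, bucket starts fall at ..., 500, 1500, 2500, ...:
#   bucket_start(1037, 1000, 500) -> 500
```

A sample with timestamp 1037 therefore lands in the bucket starting at 1000 by default, but in the bucket starting at 500 when the reference timestamp is aligned to 500.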
`AGGREGATION aggregator bucketDuration` per time series, aggregates samples into time buckets, where:

* `aggregator` takes one of the following aggregation types:

| `aggregator` | Description |
| --- | --- |
| `avg` | Arithmetic mean of all values |
| `sum` | Sum of all values |
| `min` | Minimum value |
| `max` | Maximum value |
| `range` | Difference between maximum value and minimum value |
| `count` | Number of values |
| `first` | Value with lowest timestamp in the bucket |
| `last` | Value with highest timestamp in the bucket |
| `std.p` | Population standard deviation of the values |
| `std.s` | Sample standard deviation of the values |
| `var.p` | Population variance of the values |
| `var.s` | Sample variance of the values |
| `twa` | Time-weighted average over the bucket's timeframe (since RedisTimeSeries v1.8) |

* `bucketDuration` is duration of each bucket, in milliseconds. Without `ALIGN`, bucket start times are multiples of `bucketDuration`. With `ALIGN align`, bucket start times are multiples of `bucketDuration` with remainder `align % bucketDuration`. The first bucket start time is less than or equal to `fromTimestamp`.

`[BUCKETTIMESTAMP bt]` (since RedisTimeSeries v1.8) controls how bucket timestamps are reported.

| `bt` | Timestamp reported for each bucket |
| --- | --- |
| `-` or `low` | the bucket's start time (default) |
| `+` or `high` | the bucket's end time |
| `~` or `mid` | the bucket's mid time (rounded down if not an integer) |

`[EMPTY]` (since RedisTimeSeries v1.8) is a flag, which, when specified, reports aggregations also for empty buckets.

| `aggregator` | Value reported for each empty bucket |
| --- | --- |
| `sum`, `count` | `0` |
| `last` | The value of the last sample before the bucket's start. `NaN` when no such sample. |
| `twa` | Average value over the bucket's timeframe based on linear interpolation of the last sample before the bucket's start and the first sample after the bucket's end. `NaN` when no such samples. |
| `min`, `max`, `range`, `avg`, `first`, `std.p`, `std.s` | `NaN` |

Regardless of the values of `fromTimestamp` and `toTimestamp`, no data is reported for buckets that end before the earliest sample or begin after the latest sample in the time series.

`GROUPBY label REDUCE reducer` (since RedisTimeSeries v1.6) splits time series into groups, each group contains time series that share the same value for the provided label name, then aggregates results in each group. When combined with `AGGREGATION`, `GROUPBY`/`REDUCE` is applied after the aggregation stage.

* `label` is label name. A group is created for all time series that share the same value for this label.
* `reducer` is an aggregation type used to aggregate the results in each group.

| `reducer` | Description |
| --- | --- |
| `avg` | Arithmetic mean of all non-NaN values (since RedisTimeSeries v1.8) |
| `sum` | Sum of all non-NaN values |
| `min` | Minimum non-NaN value |
| `max` | Maximum non-NaN value |
| `range` | Difference between maximum non-NaN value and minimum non-NaN value (since RedisTimeSeries v1.8) |
| `count` | Number of non-NaN values (since RedisTimeSeries v1.8) |
| `std.p` | Population standard deviation of all non-NaN values (since RedisTimeSeries v1.8) |
| `std.s` | Sample standard deviation of all non-NaN values (since RedisTimeSeries v1.8) |
| `var.p` | Population variance of all non-NaN values (since RedisTimeSeries v1.8) |
| `var.s` | Sample variance of all non-NaN values (since RedisTimeSeries v1.8) |

**Notes:**

* The produced time series is named `<label>=<value>`
* The produced time series contains two labels with these label array structures:
  + `__reducer__`, the reducer used (e.g., `"count"`)
  + `__source__`, the list of time series keys used to compute the grouped series (e.g., `"key1,key2,key3"`)

**Note:** An `MREVRANGE` command cannot be part of a transaction when running on a Redis cluster. 
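The `GROUPBY label REDUCE reducer` step described above can be illustrated with a small sketch: group series by a label's value, then reduce across each group per timestamp. This is plain Python for illustration only; NaN handling and other details of the real implementation are omitted:

```python
def group_reduce(series, label, reducer=max):
    """series: {key: (labels_dict, {timestamp: value})}.
    Returns {"label=value": {"__source__": ..., "samples": ...}}."""
    groups = {}
    for key, (labels, samples) in series.items():
        group_name = f"{label}={labels[label]}"   # produced series name
        groups.setdefault(group_name, []).append((key, samples))
    result = {}
    for group_name, members in sorted(groups.items()):
        merged = {}
        for _, samples in members:
            for ts, value in samples.items():
                merged.setdefault(ts, []).append(value)
        result[group_name] = {
            # __source__: the keys that fed this grouped series
            "__source__": ",".join(k for k, _ in members),
            # per-timestamp reduction across all members of the group
            "samples": {ts: reducer(vals) for ts, vals in merged.items()},
        }
    return result

series = {
    "stock:A": ({"type": "stock"}, {1000: 100, 1010: 110, 1020: 120}),
    "stock:B": ({"type": "stock"}, {1000: 120, 1010: 110, 1020: 100}),
}
# group_reduce(series, "type") yields one group "type=stock" whose samples
# hold the per-timestamp maximum across stock:A and stock:B.
```

This mirrors the shape of the grouped reply: one entry per `label=value` group, carrying `__source__` and the reduced samples.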
Return value ------------ If `GROUPBY label REDUCE reducer` is not specified: * [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): for each time series matching the specified filters, the following is reported: + bulk-string-reply: The time series key name + [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): label-value pairs ([Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings), [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings)) - By default, an empty list is reported - If `WITHLABELS` is specified, all labels associated with this time series are reported - If `SELECTED_LABELS label...` is specified, the selected labels are reported + [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): timestamp-value pairs ([Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) (double)): all samples/aggregations matching the range If `GROUPBY label REDUCE reducer` is specified: * [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): for each group of time series matching the specified filters, the following is reported: + bulk-string-reply with the format `label=value` where `label` is the `GROUPBY` label argument + [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): a single pair ([Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings), [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings)): the `GROUPBY` label argument and value + [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): a single pair ([Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings), [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings)): the string `__reducer__` and the reducer 
argument + [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): a single pair ([Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings), [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings)): the string `__source__` and the time series key names separated by `,` + [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): timestamp-value pairs ([Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) (double)): all samples/aggregations matching the range Examples -------- **Retrieve maximum stock price per timestamp** Create two stocks and add their prices at three different timestamps. ``` 127.0.0.1:6379> TS.CREATE stock:A LABELS type stock name A OK 127.0.0.1:6379> TS.CREATE stock:B LABELS type stock name B OK 127.0.0.1:6379> TS.MADD stock:A 1000 100 stock:A 1010 110 stock:A 1020 120 1) (integer) 1000 2) (integer) 1010 3) (integer) 1020 127.0.0.1:6379> TS.MADD stock:B 1000 120 stock:B 1010 110 stock:B 1020 100 1) (integer) 1000 2) (integer) 1010 3) (integer) 1020 ``` You can now retrieve the maximum stock price per timestamp. ``` 127.0.0.1:6379> TS.MREVRANGE - + WITHLABELS FILTER type=stock GROUPBY type REDUCE max 1) 1) "type=stock" 2) 1) 1) "type" 2) "stock" 2) 1) "\_\_reducer\_\_" 2) "max" 3) 1) "\_\_source\_\_" 2) "stock:A,stock:B" 3) 1) 1) (integer) 1020 2) 120 2) 1) (integer) 1010 2) 110 3) 1) (integer) 1000 2) 120 ``` The `FILTER type=stock` clause selects the time series representing stock prices. The `GROUPBY type REDUCE max` clause splits the time series into groups with identical type values, and then, for each timestamp, aggregates all series that share the same type value using the max aggregator. **Calculate average stock price and retrieve maximum average** Create two stocks and add their prices at nine different timestamps. 
``` 127.0.0.1:6379> TS.CREATE stock:A LABELS type stock name A OK 127.0.0.1:6379> TS.CREATE stock:B LABELS type stock name B OK 127.0.0.1:6379> TS.MADD stock:A 1000 100 stock:A 1010 110 stock:A 1020 120 1) (integer) 1000 2) (integer) 1010 3) (integer) 1020 127.0.0.1:6379> TS.MADD stock:B 1000 120 stock:B 1010 110 stock:B 1020 100 1) (integer) 1000 2) (integer) 1010 3) (integer) 1020 127.0.0.1:6379> TS.MADD stock:A 2000 200 stock:A 2010 210 stock:A 2020 220 1) (integer) 2000 2) (integer) 2010 3) (integer) 2020 127.0.0.1:6379> TS.MADD stock:B 2000 220 stock:B 2010 210 stock:B 2020 200 1) (integer) 2000 2) (integer) 2010 3) (integer) 2020 127.0.0.1:6379> TS.MADD stock:A 3000 300 stock:A 3010 310 stock:A 3020 320 1) (integer) 3000 2) (integer) 3010 3) (integer) 3020 127.0.0.1:6379> TS.MADD stock:B 3000 320 stock:B 3010 310 stock:B 3020 300 1) (integer) 3000 2) (integer) 3010 3) (integer) 3020 ``` Now, for each stock, calculate the average stock price per a 1000-millisecond timeframe, and then retrieve the stock with the maximum average for that timeframe in reverse direction. ``` 127.0.0.1:6379> TS.MREVRANGE - + WITHLABELS AGGREGATION avg 1000 FILTER type=stock GROUPBY type REDUCE max 1) 1) "type=stock" 2) 1) 1) "type" 2) "stock" 2) 1) "\_\_reducer\_\_" 2) "max" 3) 1) "\_\_source\_\_" 2) "stock:A,stock:B" 3) 1) 1) (integer) 3000 2) 310 2) 1) (integer) 2000 2) 210 3) 1) (integer) 1000 2) 110 ``` **Group query results** Query all time series with the metric label equal to `cpu`, then group the time series by the value of their `metric_name` label value and for each group return the maximum value and the time series keys (*source*) with that value. 
``` 127.0.0.1:6379> TS.ADD ts1 1548149180000 90 labels metric cpu metric\_name system (integer) 1548149180000 127.0.0.1:6379> TS.ADD ts1 1548149185000 45 (integer) 1548149185000 127.0.0.1:6379> TS.ADD ts2 1548149180000 99 labels metric cpu metric\_name user (integer) 1548149180000 127.0.0.1:6379> TS.MREVRANGE - + WITHLABELS FILTER metric=cpu GROUPBY metric\_name REDUCE max 1) 1) "metric\_name=system" 2) 1) 1) "metric\_name" 2) "system" 2) 1) "\_\_reducer\_\_" 2) "max" 3) 1) "\_\_source\_\_" 2) "ts1" 3) 1) 1) (integer) 1548149185000 2) 45 2) 1) (integer) 1548149180000 2) 90 2) 1) "metric\_name=user" 2) 1) 1) "metric\_name" 2) "user" 2) 1) "\_\_reducer\_\_" 2) "max" 3) 1) "\_\_source\_\_" 2) "ts2" 3) 1) 1) (integer) 1548149180000 2) 99 ``` **Filter query by value** Query all time series with the metric label equal to `cpu`, then filter values larger or equal to 90.0 and smaller or equal to 100.0. ``` 127.0.0.1:6379> TS.ADD ts1 1548149180000 90 labels metric cpu metric\_name system (integer) 1548149180000 127.0.0.1:6379> TS.ADD ts1 1548149185000 45 (integer) 1548149185000 127.0.0.1:6379> TS.ADD ts2 1548149180000 99 labels metric cpu metric\_name user (integer) 1548149180000 127.0.0.1:6379> TS.MREVRANGE - + FILTER\_BY\_VALUE 90 100 WITHLABELS FILTER metric=cpu 1) 1) "ts1" 2) 1) 1) "metric" 2) "cpu" 2) 1) "metric\_name" 2) "system" 3) 1) 1) (integer) 1548149180000 2) 90 2) 1) "ts2" 2) 1) 1) "metric" 2) "cpu" 2) 1) "metric\_name" 2) "user" 3) 1) 1) (integer) 1548149180000 2) 99 ``` **Query using a label** Query all time series with the metric label equal to `cpu`, but only return the team label. 
``` 127.0.0.1:6379> TS.ADD ts1 1548149180000 90 labels metric cpu metric\_name system team NY (integer) 1548149180000 127.0.0.1:6379> TS.ADD ts1 1548149185000 45 (integer) 1548149185000 127.0.0.1:6379> TS.ADD ts2 1548149180000 99 labels metric cpu metric\_name user team SF (integer) 1548149180000 127.0.0.1:6379> TS.MREVRANGE - + SELECTED\_LABELS team FILTER metric=cpu 1) 1) "ts1" 2) 1) 1) "team" 2) (nil) 3) 1) 1) (integer) 1548149185000 2) 45 2) 1) (integer) 1548149180000 2) 90 2) 1) "ts2" 2) 1) 1) "team" 2) (nil) 3) 1) 1) (integer) 1548149180000 2) 99 ``` See also -------- [`TS.MRANGE`](../ts.mrange) | [`TS.RANGE`](../ts.range) | [`TS.REVRANGE`](../ts.revrange) Related topics -------------- [RedisTimeSeries](https://redis.io/docs/stack/timeseries)
redis RPUSH RPUSH ===== ``` RPUSH ``` Syntax ``` RPUSH key element [element ...] ``` Available since: 1.0.0 Time complexity: O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments. ACL categories: `@write`, `@list`, `@fast`, Insert all the specified values at the tail of the list stored at `key`. If `key` does not exist, it is created as an empty list before performing the push operation. When `key` holds a value that is not a list, an error is returned. It is possible to push multiple elements using a single command call just specifying multiple arguments at the end of the command. Elements are inserted one after the other to the tail of the list, from the leftmost element to the rightmost element. So for instance the command `RPUSH mylist a b c` will result in a list containing `a` as first element, `b` as second element and `c` as third element. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the length of the list after the push operation. Examples -------- ``` RPUSH mylist "hello" RPUSH mylist "world" LRANGE mylist 0 -1 ``` History ------- * Starting with Redis version 2.4.0: Accepts multiple `element` arguments. redis HINCRBY HINCRBY ======= ``` HINCRBY ``` Syntax ``` HINCRBY key field increment ``` Available since: 2.0.0 Time complexity: O(1) ACL categories: `@write`, `@hash`, `@fast`, Increments the number stored at `field` in the hash stored at `key` by `increment`. If `key` does not exist, a new key holding a hash is created. If `field` does not exist the value is set to `0` before the operation is performed. The range of values supported by `HINCRBY` is limited to 64 bit signed integers. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the value at `field` after the increment operation. 
Examples -------- Since the `increment` argument is signed, both increment and decrement operations can be performed: ``` HSET myhash field 5 HINCRBY myhash field 1 HINCRBY myhash field -1 HINCRBY myhash field -10 ``` redis TS.QUERYINDEX TS.QUERYINDEX ============= ``` TS.QUERYINDEX ``` Syntax ``` TS.QUERYINDEX filterExpr... ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [TimeSeries 1.0.0](https://redis.io/docs/stack/timeseries) Time complexity: O(n) where n is the number of time-series that match the filters Get all time series keys matching a filter list [Examples](#examples) Required arguments ------------------ `filterExpr...` filters time series based on their labels and label values. Each filter expression has one of the following syntaxes: * `label=value`, where `label` equals `value` * `label!=value`, where `label` does not equal `value` * `label=`, where `key` does not have label `label` * `label!=`, where `key` has label `label` * `label=(value1,value2,...)`, where `key` with label `label` equals one of the values in the list * `label!=(value1,value2,...)`, where key with label `label` does not equal any of the values in the list **Notes:** * At least one `label=value` filter is required. * Filters are conjunctive. For example, the FILTER `type=temperature room=study` means that a time series is a temperature time series of a study room. * Don't use whitespaces in the filter expression. **Note:** The `QUERYINDEX` command cannot be part of a transaction when running on a Redis cluster. Return value ------------ Either * [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) where each element is a [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): a time series key. The array is empty if no time series matches the filter. 
* [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors) (e.g., on invalid filter expression) Examples -------- **Find keys by location and sensor type** Create a set of sensors to measure temperature and humidity in your study and kitchen. ``` 127.0.0.1:6379> TS.CREATE telemetry:study:temperature LABELS room study type temperature OK 127.0.0.1:6379> TS.CREATE telemetry:study:humidity LABELS room study type humidity OK 127.0.0.1:6379> TS.CREATE telemetry:kitchen:temperature LABELS room kitchen type temperature OK 127.0.0.1:6379> TS.CREATE telemetry:kitchen:humidity LABELS room kitchen type humidity OK ``` Retrieve keys of all time series representing sensors located in the kitchen. ``` 127.0.0.1:6379> TS.QUERYINDEX room=kitchen 1) "telemetry:kitchen:humidity" 2) "telemetry:kitchen:temperature" ``` To retrieve the keys of all time series representing sensors that measure temperature, use this query: ``` 127.0.0.1:6379> TS.QUERYINDEX type=temperature 1) "telemetry:kitchen:temperature" 2) "telemetry:study:temperature" ``` See also -------- [`TS.CREATE`](../ts.create) | [`TS.MRANGE`](../ts.mrange) | [`TS.MREVRANGE`](../ts.mrevrange) | [`TS.MGET`](../ts.mget) Related topics -------------- [RedisTimeSeries](https://redis.io/docs/stack/timeseries) redis DISCARD DISCARD ======= ``` DISCARD ``` Syntax ``` DISCARD ``` Available since: 2.0.0 Time complexity: O(N), where N is the number of queued commands ACL categories: `@fast`, `@transaction`, Flushes all previously queued commands in a [transaction](https://redis.io/topics/transactions) and restores the connection state to normal. If [`WATCH`](../watch) was used, `DISCARD` unwatches all keys watched by the connection. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): always `OK`. 
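The queueing behavior that `DISCARD` undoes can be modeled with a minimal in-memory sketch (a stand-in, not a real client): `MULTI` starts buffering commands, and `DISCARD` drops the buffer and returns the connection to its normal state.

```python
class MiniTransaction:
    """Toy model of MULTI/DISCARD command queueing (illustrative only)."""
    def __init__(self):
        self.in_multi = False
        self.queue = []

    def multi(self):
        self.in_multi = True
        return "OK"

    def command(self, *args):
        if self.in_multi:
            self.queue.append(args)   # buffered, not executed yet
            return "QUEUED"
        return f"executed {args}"

    def discard(self):
        self.queue.clear()            # flush all previously queued commands
        self.in_multi = False         # restore connection state to normal
        return "OK"

conn = MiniTransaction()
conn.multi()
conn.command("SET", "k", "v")   # -> "QUEUED"
conn.discard()                  # -> "OK"; the queue is now empty
```

A real connection behaves the same way: after `DISCARD`, none of the queued commands have run, and subsequent commands execute immediately again.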
redis LINSERT LINSERT ======= ``` LINSERT ``` Syntax ``` LINSERT key <BEFORE | AFTER> pivot element ``` Available since: 2.2.0 Time complexity: O(N) where N is the number of elements to traverse before seeing the value pivot. This means that inserting somewhere on the left end of the list (head) can be considered O(1) and inserting somewhere on the right end (tail) is O(N). ACL categories: `@write`, `@list`, `@slow`, Inserts `element` in the list stored at `key` either before or after the reference value `pivot`. When `key` does not exist, it is considered an empty list and no operation is performed. An error is returned when `key` exists but does not hold a list value. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the list length after a successful insert operation, `0` if the `key` doesn't exist, and `-1` when the `pivot` wasn't found. Examples -------- ``` RPUSH mylist "Hello" RPUSH mylist "World" LINSERT mylist BEFORE "World" "There" LRANGE mylist 0 -1 ``` redis ACL ACL === ``` ACL CAT ``` Syntax ``` ACL CAT [category] ``` Available since: 6.0.0 Time complexity: O(1) since the categories and commands are a fixed set. ACL categories: `@slow`, The command shows the available ACL categories if called without arguments. If a category name is given, the command shows all the Redis commands in the specified category. ACL categories are very useful in order to create ACL rules that include or exclude a large set of commands at once, without specifying every single command. For instance, the following rule will let the user `karin` perform everything but the most dangerous operations that may affect the server stability: ``` ACL SETUSER karin on +@all -@dangerous ``` We first add all the commands to the set of commands that `karin` is able to execute, but then we remove all the dangerous commands. 
Checking for all the available categories is as simple as: ``` > ACL CAT 1) "keyspace" 2) "read" 3) "write" 4) "set" 5) "sortedset" 6) "list" 7) "hash" 8) "string" 9) "bitmap" 10) "hyperloglog" 11) "geo" 12) "stream" 13) "pubsub" 14) "admin" 15) "fast" 16) "slow" 17) "blocking" 18) "dangerous" 19) "connection" 20) "transaction" 21) "scripting" ``` Then we may want to know what commands are part of a given category: ``` > ACL CAT dangerous 1) "flushdb" 2) "acl" 3) "slowlog" 4) "debug" 5) "role" 6) "keys" 7) "pfselftest" 8) "client" 9) "bgrewriteaof" 10) "replicaof" 11) "monitor" 12) "restore-asking" 13) "latency" 14) "replconf" 15) "pfdebug" 16) "bgsave" 17) "sync" 18) "config" 19) "flushall" 20) "cluster" 21) "info" 22) "lastsave" 23) "slaveof" 24) "swapdb" 25) "module" 26) "restore" 27) "migrate" 28) "save" 29) "shutdown" 30) "psync" 31) "sort" ``` Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): a list of ACL categories or a list of commands inside a given category. The command may return an error if an invalid category name is given as argument. redis TOUCH TOUCH ===== ``` TOUCH ``` Syntax ``` TOUCH key [key ...] ``` Available since: 3.2.1 Time complexity: O(N) where N is the number of keys that will be touched. ACL categories: `@keyspace`, `@read`, `@fast`, Alters the last access time of a key(s). A key is ignored if it does not exist. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): The number of keys that were touched. Examples -------- ``` SET key1 "Hello" SET key2 "World" TOUCH key1 key2 ``` redis LCS LCS === ``` LCS ``` Syntax ``` LCS key1 key2 [LEN] [IDX] [MINMATCHLEN min-match-len] [WITHMATCHLEN] ``` Available since: 7.0.0 Time complexity: O(N\*M) where N and M are the lengths of s1 and s2, respectively ACL categories: `@read`, `@string`, `@slow`, The LCS command implements the longest common subsequence algorithm. 
Note that this is different than the longest common string algorithm, since matching characters in the string do not need to be contiguous. For instance the LCS between "foo" and "fao" is "fo", since scanning the two strings from left to right, the longest common set of characters is composed of the first "f" and then the "o". LCS is very useful in order to evaluate how similar two strings are. Strings can represent many things. For instance if two strings are DNA sequences, the LCS will provide a measure of similarity between the two DNA sequences. If the strings represent some text edited by some user, the LCS could represent how different the new text is compared to the old one, and so forth. Note that this algorithm runs in `O(N*M)` time, where N is the length of the first string and M is the length of the second string. So either spin up a different Redis instance in order to run this algorithm, or make sure to run it against very small strings. ``` > MSET key1 ohmytext key2 mynewtext OK > LCS key1 key2 "mytext" ``` Sometimes we need just the length of the match: ``` > LCS key1 key2 LEN (integer) 6 ``` However what is often very useful, is to know the match position in each string: ``` > LCS key1 key2 IDX 1) "matches" 2) 1) 1) 1) (integer) 4 2) (integer) 7 2) 1) (integer) 5 2) (integer) 8 2) 1) 1) (integer) 2 2) (integer) 3 2) 1) (integer) 0 2) (integer) 1 3) "len" 4) (integer) 6 ``` Matches are produced from the last one to the first one, since this is how the algorithm works, and it is more efficient to emit things in the same order. The above array means that the first match (second element of the array) is between positions 2-3 of the first string and 0-1 of the second. Then there is another match between 4-7 and 5-8. 
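The O(N*M) dynamic-programming recurrence behind the command can be sketched in a few lines (plain Python, not the server's implementation):

```python
def lcs(s1, s2):
    """Longest common subsequence via the classic O(N*M) DP table."""
    n, m = len(s1), len(s2)
    # dp[i][j] = LCS length of s1[:i] and s2[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Backtrack from the bottom-right corner, emitting matched characters
    # last-to-first (the same order in which the command reports matches).
    out = []
    i, j = n, m
    while i > 0 and j > 0:
        if s1[i - 1] == s2[j - 1]:
            out.append(s1[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

# lcs("ohmytext", "mynewtext") -> "mytext", length 6, matching the
# `LCS key1 key2` and `LCS key1 key2 LEN` replies above.
```

The table fill is where the `O(N*M)` cost comes from; the backtrack that recovers the actual characters (and, in the server, the match ranges for `IDX`) is linear by comparison.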
To restrict the list of matches to the ones of a given minimal length: ``` > LCS key1 key2 IDX MINMATCHLEN 4 1) "matches" 2) 1) 1) 1) (integer) 4 2) (integer) 7 2) 1) (integer) 5 2) (integer) 8 3) "len" 4) (integer) 6 ``` Finally, to also have the match length: ``` > LCS key1 key2 IDX MINMATCHLEN 4 WITHMATCHLEN 1) "matches" 2) 1) 1) 1) (integer) 4 2) (integer) 7 2) 1) (integer) 5 2) (integer) 8 3) (integer) 4 3) "len" 4) (integer) 6 ``` Return ------ * Without modifiers the string representing the longest common subsequence is returned. * When `LEN` is given the command returns the length of the longest common subsequence. * When `IDX` is given the command returns an array with the LCS length and all the ranges in both the strings, start and end offset for each string, where there are matches. When `WITHMATCHLEN` is given each array representing a match will also have the length of the match (see examples). redis HSCAN HSCAN ===== ``` HSCAN ``` Syntax ``` HSCAN key cursor [MATCH pattern] [COUNT count] ``` Available since: 2.8.0 Time complexity: O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0. N is the number of elements inside the collection. ACL categories: `@read`, `@hash`, `@slow`, See [`SCAN`](../scan) for `HSCAN` documentation. redis BF.MADD BF.MADD ======= ``` BF.MADD ``` Syntax ``` BF.MADD key item [item ...] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 1.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(k \* n), where k is the number of hash functions and n is the number of items Adds one or more items to the Bloom Filter and creates the filter if it does not exist yet. This command operates identically to [`BF.ADD`](../bf.add) except that it allows multiple inputs and returns multiple values. 
### Parameters * **key**: The name of the filter * **item**: One or more items to add Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) of [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) - for each item which is either "1" or "0" depending on whether the corresponding input element was newly added to the filter or may have previously existed. Examples -------- ``` redis> BF.MADD bf item1 item2 1) (integer) 0 2) (integer) 1 ``` redis UNLINK UNLINK ====== ``` UNLINK ``` Syntax ``` UNLINK key [key ...] ``` Available since: 4.0.0 Time complexity: O(1) for each key removed regardless of its size. Then the command does O(N) work in a different thread in order to reclaim memory, where N is the number of allocations the deleted objects were composed of. ACL categories: `@keyspace`, `@write`, `@fast`, This command is very similar to [`DEL`](../del): it removes the specified keys. Just like [`DEL`](../del) a key is ignored if it does not exist. However the command performs the actual memory reclaiming in a different thread, so it is not blocking, while [`DEL`](../del) is. This is where the command name comes from: the command just **unlinks** the keys from the keyspace. The actual removal will happen later asynchronously. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): The number of keys that were unlinked. Examples -------- ``` SET key1 "Hello" SET key2 "World" UNLINK key1 key2 key3 ``` redis SETRANGE SETRANGE ======== ``` SETRANGE ``` Syntax ``` SETRANGE key offset value ``` Available since: 2.2.0 Time complexity: O(1), not counting the time taken to copy the new string in place. Usually, this string is very small so the amortized complexity is O(1). Otherwise, complexity is O(M) with M being the length of the value argument. 
ACL categories: `@write`, `@string`, `@slow`, Overwrites part of the string stored at *key*, starting at the specified offset, for the entire length of *value*. If the offset is larger than the current length of the string at *key*, the string is padded with zero-bytes to make *offset* fit. Non-existing keys are considered as empty strings, so this command will make sure it holds a string large enough to be able to set *value* at *offset*. Note that the maximum offset that you can set is 2^29 - 1 (536870911), as Redis Strings are limited to 512 megabytes. If you need to grow beyond this size, you can use multiple keys. **Warning**: When setting the last possible byte and the string value stored at *key* does not yet hold a string value, or holds a small string value, Redis needs to allocate all intermediate memory which can block the server for some time. On a 2010 MacBook Pro, setting byte number 536870911 (512MB allocation) takes ~300ms, setting byte number 134217728 (128MB allocation) takes ~80ms, setting byte number 33554432 (32MB allocation) takes ~30ms and setting byte number 8388608 (8MB allocation) takes ~8ms. Note that once this first allocation is done, subsequent calls to `SETRANGE` for the same *key* will not have the allocation overhead. Patterns -------- Thanks to `SETRANGE` and the analogous [`GETRANGE`](../getrange) commands, you can use Redis strings as a linear array with O(1) random access. This is a very fast and efficient storage option in many real-world use cases. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the length of the string after it was modified by the command. Examples -------- Basic usage: ``` SET key1 "Hello World" SETRANGE key1 6 "Redis" GET key1 ``` Example of zero padding: ``` SETRANGE key2 6 "Redis" GET key2 ``` redis HMSET HMSET ===== ``` HMSET (deprecated) ``` As of Redis version 4.0.0, this command is regarded as deprecated.
It can be replaced by [`HSET`](../hset) with multiple field-value pairs when migrating or writing new code. Syntax ``` HMSET key field value [field value ...] ``` Available since: 2.0.0 Time complexity: O(N) where N is the number of fields being set. ACL categories: `@write`, `@hash`, `@fast`, Sets the specified fields to their respective values in the hash stored at `key`. This command overwrites any specified fields already existing in the hash. If `key` does not exist, a new key holding a hash is created. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) Examples -------- ``` HMSET myhash field1 "Hello" field2 "World" HGET myhash field1 HGET myhash field2 ``` redis SCRIPT SCRIPT ====== ``` SCRIPT LOAD ``` Syntax ``` SCRIPT LOAD script ``` Available since: 2.6.0 Time complexity: O(N) with N being the length in bytes of the script body. ACL categories: `@slow`, `@scripting`, Load a script into the scripts cache, without executing it. After the specified command is loaded into the script cache it will be callable using [`EVALSHA`](../evalsha) with the correct SHA1 digest of the script, exactly like after the first successful invocation of [`EVAL`](../eval). The script is guaranteed to stay in the script cache forever (unless `SCRIPT FLUSH` is called). The command works in the same way even if the script was already present in the script cache. For more information about [`EVAL`](../eval) scripts please refer to [Introduction to Eval Scripts](https://redis.io/topics/eval-intro). Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings) This command returns the SHA1 digest of the script added into the script cache. 
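The digest returned by `SCRIPT LOAD` can also be computed client-side, since it is the SHA1 of the script body; a minimal sketch (assuming the script is UTF-8 text):

```python
import hashlib

def script_digest(script: str) -> str:
    # SCRIPT LOAD replies with the lowercase hex SHA1 of the script
    # body; the same digest is what EVALSHA takes as its first argument.
    return hashlib.sha1(script.encode("utf-8")).hexdigest()

digest = script_digest("return 1")
print(digest)  # 40 hex characters, stable for the same script body
```

Computing the digest locally lets a client call `EVALSHA` optimistically and fall back to `EVAL` only on a `NOSCRIPT` error.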
redis RESTORE-ASKING RESTORE-ASKING ============== ``` RESTORE-ASKING ``` Syntax ``` RESTORE-ASKING key ttl serialized-value [REPLACE] [ABSTTL] [IDLETIME seconds] [FREQ frequency] ``` Available since: 3.0.0 Time complexity: O(1) to create the new key and additional O(N\*M) to reconstruct the serialized value, where N is the number of Redis objects composing the value and M their average size. For small string values the time complexity is thus O(1)+O(1\*M) where M is small, so simply O(1). However for sorted set values the complexity is O(N\*M\*log(N)) because inserting values into sorted sets is O(log(N)). ACL categories: `@keyspace`, `@write`, `@slow`, `@dangerous`, The `RESTORE-ASKING` command is an internal command. It is used by a Redis cluster master during slot migration. History ------- * Starting with Redis version 3.0.0: Added the `REPLACE` modifier. * Starting with Redis version 5.0.0: Added the `ABSTTL` modifier. * Starting with Redis version 5.0.0: Added the `IDLETIME` and `FREQ` options. redis XREAD XREAD ===== ``` XREAD ``` Syntax ``` XREAD [COUNT count] [BLOCK milliseconds] STREAMS key [key ...] id [id ...] ``` Available since: 5.0.0 Time complexity: ACL categories: `@read`, `@stream`, `@slow`, `@blocking`, Read data from one or multiple streams, only returning entries with an ID greater than the last received ID reported by the caller. This command has an option to block if items are not available, in a similar fashion to [`BRPOP`](../brpop) or [`BZPOPMIN`](../bzpopmin) and others. Please note that before reading this page, if you are new to streams, we recommend to read [our introduction to Redis Streams](https://redis.io/topics/streams-intro). 
Non-blocking usage ------------------ If the **BLOCK** option is not used, the command is synchronous, and can be considered somewhat related to [`XRANGE`](../xrange): it will return a range of items inside streams, however it has two fundamental differences compared to [`XRANGE`](../xrange) even if we just consider the synchronous usage: * This command can be called with multiple streams if we want to read at the same time from a number of keys. This is a key feature of `XREAD` because especially when blocking with **BLOCK**, to be able to listen with a single connection to multiple keys is a vital feature. * While [`XRANGE`](../xrange) returns items in a range of IDs, `XREAD` is more suited in order to consume the stream starting from the first entry which is greater than any other entry we saw so far. So what we pass to `XREAD` is, for each stream, the ID of the last element that we received from that stream. For example, if I have two streams `mystream` and `writers`, and I want to read data from both the streams starting from the first element they contain, I could call `XREAD` like in the following example. Note: we use the **COUNT** option in the example, so that for each stream the call will return at maximum two elements per stream. ``` > XREAD COUNT 2 STREAMS mystream writers 0-0 0-0 1) 1) "mystream" 2) 1) 1) 1526984818136-0 2) 1) "duration" 2) "1532" 3) "event-id" 4) "5" 5) "user-id" 6) "7782813" 2) 1) 1526999352406-0 2) 1) "duration" 2) "812" 3) "event-id" 4) "9" 5) "user-id" 6) "388234" 2) 1) "writers" 2) 1) 1) 1526985676425-0 2) 1) "name" 2) "Virginia" 3) "surname" 4) "Woolf" 2) 1) 1526985685298-0 2) 1) "name" 2) "Jane" 3) "surname" 4) "Austen" ``` The **STREAMS** option is mandatory and MUST be the final option because such option gets a variable length of argument in the following format: ``` STREAMS key_1 key_2 key_3 ... key_N ID_1 ID_2 ID_3 ... 
ID_N ``` So we start with a list of keys, and later continue with all the associated IDs, representing *the last ID we received for that stream*, so that the call will serve us only greater IDs from the same stream. For instance in the above example, the last item that we received from the stream `mystream` has ID `1526999352406-0`, while for the stream `writers` it is `1526985685298-0`. To continue iterating the two streams I'll call: ``` > XREAD COUNT 2 STREAMS mystream writers 1526999352406-0 1526985685298-0 1) 1) "mystream" 2) 1) 1) 1526999626221-0 2) 1) "duration" 2) "911" 3) "event-id" 4) "7" 5) "user-id" 6) "9488232" 2) 1) "writers" 2) 1) 1) 1526985691746-0 2) 1) "name" 2) "Toni" 3) "surname" 4) "Morrison" 2) 1) 1526985712947-0 2) 1) "name" 2) "Agatha" 3) "surname" 4) "Christie" ``` And so forth. Eventually, the call will not return any item, but just an empty array, then we know that there is nothing more to fetch from our stream (and we would have to retry the operation, hence this command also supports a blocking mode). Incomplete IDs -------------- Using incomplete IDs is valid, like it is valid for [`XRANGE`](../xrange). However here the sequence part of the ID, if missing, is always interpreted as zero, so the command: ``` > XREAD COUNT 2 STREAMS mystream writers 0 0 ``` is exactly equivalent to ``` > XREAD COUNT 2 STREAMS mystream writers 0-0 0-0 ``` Blocking for data ----------------- In its synchronous form, the command can get new data as long as there are more items available. However, at some point, we'll have to wait for producers of data to use [`XADD`](../xadd) to push new entries inside the streams we are consuming. In order to avoid polling at a fixed or adaptive interval the command is able to block if it could not return any data, according to the specified streams and IDs, and automatically unblock once one of the requested keys accepts data.
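The block-until-data-or-timeout behavior can be sketched generically. This is a hypothetical helper, not real client code: a `queue.Queue` stands in for the stream, and `None` plays the role of the nil reply.

```python
import queue

def xread_block(incoming: "queue.Queue", block_ms: int):
    # Wait up to block_ms milliseconds for a new entry; on timeout
    # return None, mirroring the nil reply XREAD gives when BLOCK
    # elapses without data arriving.
    try:
        return incoming.get(timeout=block_ms / 1000.0)
    except queue.Empty:
        return None

stream = queue.Queue()
print(xread_block(stream, 10))  # None: nothing arrived in time
stream.put(("1526999626221-0", {"duration": "911"}))
print(xread_block(stream, 10))  # the entry that was waiting
```

If data is already queued when the call is made, it returns immediately, just as XREAD executes synchronously when at least one stream has entries to serve.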
It is important to understand that this command *fans out* to all the clients that are waiting for the same range of IDs, so every consumer will get a copy of the data, unlike what happens when blocking list pop operations are used. In order to block, the **BLOCK** option is used, together with the number of milliseconds we want to block before timing out. Normally Redis blocking commands take timeouts in seconds, however this command takes a millisecond timeout, even if normally the server will have a timeout resolution near 0.1 seconds. This way it is possible to block for a shorter time in certain use cases, and if the server internals improve over time, it is possible that the resolution of timeouts will improve. When the **BLOCK** option is passed, but there is data to return in at least one of the streams passed, the command is executed synchronously *exactly as if the BLOCK option were missing*. This is an example of blocking invocation, where the command later returns a null reply because the timeout has elapsed without new data arriving: ``` > XREAD BLOCK 1000 STREAMS mystream 1526999626221-0 (nil) ``` The special `$` ID. ------------------- When blocking, sometimes we want to receive just the entries that are added to the stream via [`XADD`](../xadd) starting from the moment we block. In such a case we are not interested in the history of already added entries. For this use case, we would have to check the stream top element ID, and use such ID in the `XREAD` command line. This is not clean and requires calling other commands, so instead it is possible to use the special `$` ID to signal the stream that we want only the new things. It is **very important** to understand that you should use the `$` ID only for the first call to `XREAD`. Later the ID should be the one of the last reported item in the stream, otherwise you could miss all the entries that are added in between.
This is what a typical `XREAD` call looks like in the first iteration of a consumer willing to consume only new entries: ``` > XREAD BLOCK 5000 COUNT 100 STREAMS mystream $ ``` Once we get some replies, the next call will be something like: ``` > XREAD BLOCK 5000 COUNT 100 STREAMS mystream 1526999644174-3 ``` And so forth. How multiple clients blocked on a single stream are served ---------------------------------------------------------- Blocking operations on lists or sorted sets have a *pop* behavior. Basically, the element is removed from the list or sorted set in order to be returned to the client. In this scenario you want the items to be consumed in a fair way, depending on the moment clients blocked on a given key arrived. Normally Redis uses FIFO semantics in these use cases. However note that with streams this is not a problem: stream entries are not removed from the stream when clients are served, so every client waiting will be served as soon as an [`XADD`](../xadd) command provides data to the stream. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays), specifically: The command returns an array of results: each element of the returned array is a two-element array containing the key name and the entries reported for that key. The entries reported are full stream entries, having IDs and the list of all the fields and values. Fields and values are guaranteed to be reported in the same order they were added by [`XADD`](../xadd). When **BLOCK** is used, on timeout a null reply is returned. Reading the [Redis Streams introduction](https://redis.io/topics/streams-intro) is highly suggested in order to understand more about the overall behavior and semantics of streams.
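The ID bookkeeping described above (an incomplete ID defaults its sequence part to zero, and only entries strictly greater than the last reported ID are served) can be sketched in Python; the `(id, fields)` entry layout is an assumption for illustration:

```python
def parse_id(stream_id: str) -> tuple:
    # '1526999352406-0' -> (1526999352406, 0); a missing sequence part
    # is interpreted as zero, so '0' is equivalent to '0-0'.
    ms, _, seq = stream_id.partition("-")
    return (int(ms), int(seq or 0))

def entries_after(entries: list, last_id: str) -> list:
    # XREAD-style selection: only entries with an ID strictly greater
    # than the last ID the caller reported for this stream.
    last = parse_id(last_id)
    return [(eid, f) for eid, f in entries if parse_id(eid) > last]

mystream = [
    ("1526984818136-0", {"event-id": "5"}),
    ("1526999352406-0", {"event-id": "9"}),
]
print(len(entries_after(mystream, "0")))                # 2: both entries
print(len(entries_after(mystream, "1526984818136-0")))  # 1: only the newer one
```

Tuple comparison gives the same ordering Redis uses: milliseconds first, sequence number as the tie-breaker.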
redis FT.SUGLEN FT.SUGLEN ========= ``` FT.SUGLEN ``` Syntax ``` FT.SUGLEN key ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.0.0](https://redis.io/docs/stack/search) Time complexity: O(1) Get the size of an auto-complete suggestion dictionary [Examples](#examples) Required arguments ------------------ `key` is the suggestion dictionary key. Return ------ FT.SUGLEN returns an integer reply, which is the current size of the suggestion dictionary. Examples -------- **Get the size of an auto-complete suggestion dictionary** ``` 127.0.0.1:6379> FT.SUGLEN sug (integer) 2 ``` See also -------- [`FT.SUGADD`](../ft.sugadd) | [`FT.SUGDEL`](../ft.sugdel) | [`FT.SUGGET`](../ft.sugget) Related topics -------------- [RediSearch](https://redis.io/docs/stack/search) redis AUTH AUTH ==== ``` AUTH ``` Syntax ``` AUTH [username] password ``` Available since: 1.0.0 Time complexity: O(N) where N is the number of passwords defined for the user ACL categories: `@fast`, `@connection`, The AUTH command authenticates the current connection in two cases: 1. If the Redis server is password protected via the `requirepass` option. 2. A Redis 6.0 instance, or greater, is using the [Redis ACL system](https://redis.io/topics/acl). Redis versions prior to Redis 6 were only able to understand the one-argument version of the command: ``` AUTH <password> ``` This form just authenticates against the password set with `requirepass`. In this configuration Redis will deny any command executed by just-connected clients, unless the connection gets authenticated via `AUTH`. If the password provided via AUTH matches the password in the configuration file, the server replies with the `OK` status code and starts accepting commands. Otherwise, an error is returned and the client needs to try a new password.
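The single-argument check against `requirepass` amounts to a password comparison; here is a server-side sketch. The password value is a made-up assumption, `compare_digest` is a defensive choice to avoid timing leaks, and the error is kept generic since the exact error text varies by Redis version.

```python
import hmac

REQUIREPASS = "s3cret-example"  # stands in for the requirepass directive

def auth(password: str) -> str:
    # Constant-time comparison of the supplied password against the
    # configured one; reply OK on a match, otherwise signal an error.
    if hmac.compare_digest(password.encode(), REQUIREPASS.encode()):
        return "OK"
    raise ValueError("invalid password")
```

`auth("s3cret-example")` returns `"OK"`; any other password raises, which mirrors the client having to try a new password.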
When Redis ACLs are used, the command should be given in an extended way: ``` AUTH <username> <password> ``` This form authenticates the current connection with one of the users defined in the ACL list (see [`ACL SETUSER`](../acl-setuser) and the official [ACL guide](https://redis.io/topics/acl) for more information). When ACLs are used, the single argument form of the command, where only the password is specified, assumes that the implicit username is "default". Security notice --------------- Because of the high-performance nature of Redis, it is possible to try a lot of passwords in parallel in a very short time, so make sure to generate a strong and very long password so that this attack is infeasible. A good way to generate strong passwords is via the [`ACL GENPASS`](../acl-genpass) command. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) or an error if the password, or username/password pair, is invalid. History ------- * Starting with Redis version 6.0.0: Added ACL style (username and password). redis CLUSTER CLUSTER ======= ``` CLUSTER SET-CONFIG-EPOCH ``` Syntax ``` CLUSTER SET-CONFIG-EPOCH config-epoch ``` Available since: 3.0.0 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, This command sets a specific *config epoch* in a fresh node. It only works when: 1. The nodes table of the node is empty. 2. The node current *config epoch* is zero. These prerequisites are needed because manually altering the configuration epoch of a node is normally unsafe: we want to be sure that the node with the higher configuration epoch value (that is, the last that failed over) wins over other nodes in claiming the hash slots ownership. However there is an exception to this rule, and it is when a new cluster is created from scratch.
Redis Cluster *config epoch collision resolution* algorithm can deal with new nodes all configured with the same configuration epoch at startup, but this process is slow and should be the exception, only to make sure that, whatever happens, two or more nodes with the same configuration epoch eventually always move away from that state. So, using `CLUSTER SET-CONFIG-EPOCH`, when a new cluster is created, we can assign a different progressive configuration epoch to each node before joining the cluster together. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` if the command was executed successfully, otherwise an error is returned. redis FT.CONFIG FT.CONFIG ========= ``` FT.CONFIG GET ``` Syntax ``` FT.CONFIG GET option ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.0.0](https://redis.io/docs/stack/search) Time complexity: O(1) Retrieve configuration options [Examples](#examples) Required arguments ------------------ `option` is the name of the configuration option, or '\*' for all. Return ------ FT.CONFIG GET returns an array reply of the configuration name and value.
Examples -------- **Retrieve configuration options** ``` 127.0.0.1:6379> FT.CONFIG GET TIMEOUT 1) 1) TIMEOUT 2) 42 ``` ``` 127.0.0.1:6379> FT.CONFIG GET * 1) 1) EXTLOAD 2) (nil) 2) 1) SAFEMODE 2) true 3) 1) CONCURRENT_WRITE_MODE 2) false 4) 1) NOGC 2) false 5) 1) MINPREFIX 2) 2 6) 1) FORKGC_SLEEP_BEFORE_EXIT 2) 0 7) 1) MAXDOCTABLESIZE 2) 1000000 8) 1) MAXSEARCHRESULTS 2) 1000000 9) 1) MAXAGGREGATERESULTS 2) unlimited 10) 1) MAXEXPANSIONS 2) 200 11) 1) MAXPREFIXEXPANSIONS 2) 200 12) 1) TIMEOUT 2) 42 13) 1) INDEX_THREADS 2) 8 14) 1) SEARCH_THREADS 2) 20 15) 1) FRISOINI 2) (nil) 16) 1) ON_TIMEOUT 2) return 17) 1) GCSCANSIZE 2) 100 18) 1) MIN_PHONETIC_TERM_LEN 2) 3 19) 1) GC_POLICY 2) fork 20) 1) FORK_GC_RUN_INTERVAL 2) 30 21) 1) FORK_GC_CLEAN_THRESHOLD 2) 100 22) 1) FORK_GC_RETRY_INTERVAL 2) 5 23) 1) FORK_GC_CLEAN_NUMERIC_EMPTY_NODES 2) true 24) 1) _FORK_GC_CLEAN_NUMERIC_EMPTY_NODES 2) true 25) 1) _MAX_RESULTS_TO_UNSORTED_MODE 2) 1000 26) 1) UNION_ITERATOR_HEAP 2) 20 27) 1) CURSOR_MAX_IDLE 2) 300000 28) 1) NO_MEM_POOLS 2) false 29) 1) PARTIAL_INDEXED_DOCS 2) false 30) 1) UPGRADE_INDEX 2) Upgrade config for upgrading 31) 1) _NUMERIC_COMPRESS 2) false 32) 1) _FREE_RESOURCE_ON_THREAD 2) true 33) 1) _PRINT_PROFILE_CLOCK 2) true 34) 1) RAW_DOCID_ENCODING 2) false 35) 1) _NUMERIC_RANGES_PARENTS 2) 0 ``` See also -------- [`FT.CONFIG SET`](../ft.config-set) | [`FT.CONFIG HELP`](../ft.config-help) Related topics -------------- [RediSearch](https://redis.io/docs/stack/search) redis FUNCTION FUNCTION ======== ``` FUNCTION DELETE ``` Syntax ``` FUNCTION DELETE library-name ``` Available since: 7.0.0 Time complexity: O(1) ACL categories: `@write`, `@slow`, `@scripting`, Delete a library and all its functions. This command deletes the library called *library-name* and all functions in it. If the library doesn't exist, the server returns an error.
For more information please refer to [Introduction to Redis Functions](https://redis.io/topics/functions-intro). Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) Examples -------- ``` redis> FUNCTION LOAD Lua mylib "redis.register_function('myfunc', function(keys, args) return 'hello' end)" OK redis> FCALL myfunc 0 "hello" redis> FUNCTION DELETE mylib OK redis> FCALL myfunc 0 (error) ERR Function not found ``` redis SCRIPT SCRIPT ====== ``` SCRIPT EXISTS ``` Syntax ``` SCRIPT EXISTS sha1 [sha1 ...] ``` Available since: 2.6.0 Time complexity: O(N) with N being the number of scripts to check (so checking a single script is an O(1) operation). ACL categories: `@slow`, `@scripting`, Returns information about the existence of the scripts in the script cache. This command accepts one or more SHA1 digests and returns a list of ones or zeros to signal if the scripts are already defined or not inside the script cache. This can be useful before a pipelining operation to ensure that scripts are loaded (and if not, to load them using [`SCRIPT LOAD`](../script-load)) so that the pipelining operation can be performed solely using [`EVALSHA`](../evalsha) instead of [`EVAL`](../eval) to save bandwidth. For more information about [`EVAL`](../eval) scripts please refer to [Introduction to Eval Scripts](https://redis.io/topics/eval-intro). Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) The command returns an array of integers that correspond to the specified SHA1 digest arguments. For every corresponding SHA1 digest of a script that actually exists in the script cache, a 1 is returned, otherwise 0 is returned. 
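The check-then-load pipelining pattern can be sketched with a local `set` standing in for the server's script cache. These are hypothetical helpers for illustration; only `hashlib` is a real library call.

```python
import hashlib

def sha1_hex(script: str) -> str:
    # Same digest SCRIPT LOAD would return for this script body.
    return hashlib.sha1(script.encode("utf-8")).hexdigest()

def script_exists(cache: set, *digests: str) -> list:
    # One integer per digest, in argument order: 1 if that script
    # is cached, else 0 -- the shape of the SCRIPT EXISTS reply.
    return [1 if d in cache else 0 for d in digests]

def ensure_loaded(cache: set, script: str) -> str:
    # Check first, load only if missing; the caller can then pipeline
    # EVALSHA with the returned digest instead of resending the body.
    digest = sha1_hex(script)
    if script_exists(cache, digest) == [0]:
        cache.add(digest)  # stands in for SCRIPT LOAD
    return digest
```

This is the bandwidth-saving pattern the paragraph describes: verify presence once, then drive the whole pipeline through `EVALSHA`.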
redis TDIGEST.INFO TDIGEST.INFO ============ ``` TDIGEST.INFO ``` Syntax ``` TDIGEST.INFO key ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.4.0](https://redis.io/docs/stack/bloom) Time complexity: O(1) Returns information and statistics about a t-digest sketch. Required arguments ------------------ `key` is key name for an existing t-digest sketch. Return value ------------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) with information about the sketch: | Name | Description | | --- | --- | | `Compression` | The compression (controllable trade-off between accuracy and memory consumption) of the sketch | | `Capacity` | Size of the buffer used for storing the centroids and for the incoming unmerged observations | | `Merged nodes` | Number of merged observations | | `Unmerged nodes` | Number of buffered nodes (uncompressed observations) | | `Merged weight` | Weight of values of the merged nodes | | `Unmerged weight` | Weight of values of the unmerged nodes (uncompressed observations) | | `Observations` | Number of observations added to the sketch | | `Total compressions` | Number of times this sketch compressed data together | | `Memory usage` | Number of bytes allocated for the sketch | Examples -------- ``` redis> TDIGEST.CREATE t OK redis> TDIGEST.ADD t 1 2 3 4 5 OK redis> TDIGEST.INFO t 1) Compression 2) (integer) 100 3) Capacity 4) (integer) 610 5) Merged nodes 6) (integer) 0 7) Unmerged nodes 8) (integer) 5 9) Merged weight 10) (integer) 0 11) Unmerged weight 12) (integer) 5 13) Observations 14) (integer) 5 15) Total compressions 16) (integer) 0 17) Memory usage 18) (integer) 9768 ``` redis RPOP RPOP ==== ``` RPOP ``` Syntax ``` RPOP key [count] ``` Available since: 1.0.0 Time complexity: O(N) where N is the number of elements returned ACL categories: `@write`, `@list`, `@fast`, Removes and returns the last elements of the list stored at `key`. 
By default, the command pops a single element from the end of the list. When provided with the optional `count` argument, the reply will consist of up to `count` elements, depending on the list's length. Return ------ When called without the `count` argument: [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): the value of the last element, or `nil` when `key` does not exist. When called with the `count` argument: [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list of popped elements, or `nil` when `key` does not exist. Examples -------- ``` RPUSH mylist "one" "two" "three" "four" "five" RPOP mylist RPOP mylist 2 LRANGE mylist 0 -1 ``` History ------- * Starting with Redis version 6.2.0: Added the `count` argument. redis CLIENT CLIENT ====== ``` CLIENT GETNAME ``` Syntax ``` CLIENT GETNAME ``` Available since: 2.6.9 Time complexity: O(1) ACL categories: `@slow`, `@connection`, The `CLIENT GETNAME` returns the name of the current connection as set by [`CLIENT SETNAME`](../client-setname). Since every new connection starts without an associated name, if no name was assigned a null bulk reply is returned. Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): The connection name, or a null bulk reply if no name is set. redis TS.DELETERULE TS.DELETERULE ============= ``` TS.DELETERULE ``` Syntax ``` TS.DELETERULE sourceKey destKey ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [TimeSeries 1.0.0](https://redis.io/docs/stack/timeseries) Time complexity: O(1) Delete a compaction rule Required arguments ------------------ `sourceKey` is key name for the source time series. `destKey` is key name for destination (compacted) time series. **Note:** This command does not delete the compacted series. 
Return value ------------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) - `OK` if executed correctly, or [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors) otherwise. See also -------- [`TS.CREATERULE`](../ts.createrule) Related topics -------------- [RedisTimeSeries](https://redis.io/docs/stack/timeseries) redis CLIENT CLIENT ====== ``` CLIENT REPLY ``` Syntax ``` CLIENT REPLY <ON | OFF | SKIP> ``` Available since: 3.2.0 Time complexity: O(1) ACL categories: `@slow`, `@connection`, Sometimes it can be useful for clients to completely disable replies from the Redis server. For example when the client sends fire-and-forget commands or performs a mass loading of data, or in caching contexts where new data is streamed constantly. In such contexts, using server time and bandwidth to send back replies that are going to be ignored is considered wasteful. The `CLIENT REPLY` command controls whether the server will reply to the client's commands. The following modes are available: * `ON`. This is the default mode in which the server returns a reply to every command. * `OFF`. In this mode the server will not reply to client commands. * `SKIP`. This mode skips the reply of the command immediately after it. Return ------ When called with either `OFF` or `SKIP` subcommands, no reply is made. When called with `ON`: [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK`. redis BITPOS BITPOS ====== ``` BITPOS ``` Syntax ``` BITPOS key bit [start [end [BYTE | BIT]]] ``` Available since: 2.8.7 Time complexity: O(N) ACL categories: `@read`, `@bitmap`, `@slow`, Return the position of the first bit set to 1 or 0 in a string. The position is returned, thinking of the string as an array of bits from left to right, where the first byte's most significant bit is at position 0, the second byte's most significant bit is at position 8, and so forth.
The same bit position convention is followed by [`GETBIT`](../getbit) and [`SETBIT`](../setbit). By default, all the bytes contained in the string are examined. It is possible to look for bits only in a specified interval by passing the additional arguments *start* and *end* (it is possible to just pass *start*; the operation will then assume that the end is the last byte of the string. However, there are semantic differences, as explained later). By default, the range is interpreted as a range of bytes and not a range of bits, so `start=0` and `end=2` means to look at the first three bytes. You can use the optional `BIT` modifier to specify that the range should be interpreted as a range of bits. So `start=0` and `end=2` means to look at the first three bits. Note that bit positions are always returned as absolute values starting from bit zero, even when *start* and *end* are used to specify a range. Like for the [`GETRANGE`](../getrange) command, *start* and *end* can contain negative values in order to index bytes starting from the end of the string, where -1 is the last byte, -2 is the penultimate, and so forth. When `BIT` is specified, -1 is the last bit, -2 is the penultimate, and so forth. Non-existent keys are treated as empty strings. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) The command returns the position of the first bit set to 1 or 0 according to the request. If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is returned. If we look for clear bits (the bit argument is 0) and the string only contains bits set to 1, the function returns the first bit not part of the string on the right. So if the string is three bytes set to the value `0xff` the command `BITPOS key 0` will return 24, since up to bit 23 all the bits are 1.
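These return-value rules can be sketched in Python as a whole-string scan (no range support, which is exactly the case where the right-padding rule applies):

```python
def bitpos(value: bytes, bit: int) -> int:
    # Bit 0 is the most significant bit of the first byte; scan left
    # to right for the first bit equal to `bit`.
    for pos in range(len(value) * 8):
        if (value[pos // 8] >> (7 - pos % 8)) & 1 == bit:
            return pos
    # Not found: for bit=0 the string is treated as right-padded with
    # zeros, so the first clear bit is just past the end of the string;
    # for bit=1 there is nothing to report and -1 is returned.
    return len(value) * 8 if bit == 0 else -1

print(bitpos(b"\xff\xf0\x00", 0))  # 12: first clear bit after 12 ones
print(bitpos(b"\x00\xff\xf0", 1))  # 8: MSB of the second byte
print(bitpos(b"\xff\xff\xff", 0))  # 24, as in the 0xff example above
```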
Basically, the function considers the right of the string as padded with zeros if you look for clear bits and specify no range or the *start* argument **only**. However, this behavior changes if you are looking for clear bits and specify a range with both **start** and **end**. If no clear bit is found in the specified range, the function returns -1 as the user specified a clear range and there are no 0 bits in that range. Examples -------- ``` SET mykey "\xff\xf0\x00" BITPOS mykey 0 SET mykey "\x00\xff\xf0" BITPOS mykey 1 0 BITPOS mykey 1 2 BITPOS mykey 1 2 -1 BYTE BITPOS mykey 1 7 15 BIT set mykey "\x00\x00\x00" BITPOS mykey 1 BITPOS mykey 1 7 -3 BIT ``` History ------- * Starting with Redis version 7.0.0: Added the `BYTE|BIT` option. redis LSET LSET ==== ``` LSET ``` Syntax ``` LSET key index element ``` Available since: 1.0.0 Time complexity: O(N) where N is the length of the list. Setting either the first or the last element of the list is O(1). ACL categories: `@write`, `@list`, `@slow`, Sets the list element at `index` to `element`. For more information on the `index` argument, see [`LINDEX`](../lindex). An error is returned for out of range indexes. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) Examples -------- ``` RPUSH mylist "one" RPUSH mylist "two" RPUSH mylist "three" LSET mylist 0 "four" LSET mylist -2 "five" LRANGE mylist 0 -1 ``` redis INCRBYFLOAT INCRBYFLOAT =========== ``` INCRBYFLOAT ``` Syntax ``` INCRBYFLOAT key increment ``` Available since: 2.6.0 Time complexity: O(1) ACL categories: `@write`, `@string`, `@fast`, Increment the string representing a floating point number stored at `key` by the specified `increment`. By using a negative `increment` value, the result is that the value stored at the key is decremented (by the obvious properties of addition). If the key does not exist, it is set to `0` before performing the operation. 
An error is returned if one of the following conditions occur: * The key contains a value of the wrong type (not a string). * The current key content or the specified increment are not parsable as a double precision floating point number. If the command is successful the new incremented value is stored as the new value of the key (replacing the old one), and returned to the caller as a string. Both the value already contained in the string key and the increment argument can be optionally provided in exponential notation, however the value computed after the increment is stored consistently in the same format, that is, an integer number followed (if needed) by a dot, and a variable number of digits representing the decimal part of the number. Trailing zeroes are always removed. The precision of the output is fixed at 17 digits after the decimal point regardless of the actual internal precision of the computation. Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): the value of `key` after the increment. Examples -------- ``` SET mykey 10.50 INCRBYFLOAT mykey 0.1 INCRBYFLOAT mykey -5 SET mykey 5.0e3 INCRBYFLOAT mykey 2.0e2 ``` Implementation details ---------------------- The command is always propagated in the replication link and the Append Only File as a [`SET`](../set) operation, so that differences in the underlying floating point math implementation will not be sources of inconsistency. redis EXISTS EXISTS ====== ``` EXISTS ``` Syntax ``` EXISTS key [key ...] ``` Available since: 1.0.0 Time complexity: O(N) where N is the number of keys to check. ACL categories: `@keyspace`, `@read`, `@fast`, Returns if `key` exists. The user should be aware that if the same existing key is mentioned in the arguments multiple times, it will be counted multiple times. So if `somekey` exists, `EXISTS somekey somekey` will return 2. 
Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), specifically the number of keys that exist from those specified as arguments. Examples -------- ``` SET key1 "Hello" EXISTS key1 EXISTS nosuchkey SET key2 "World" EXISTS key1 key2 nosuchkey ``` History ------- * Starting with Redis version 3.0.3: Accepts multiple `key` arguments.
redis PUNSUBSCRIBE PUNSUBSCRIBE ============ ``` PUNSUBSCRIBE ``` Syntax ``` PUNSUBSCRIBE [pattern [pattern ...]] ``` Available since: 2.0.0 Time complexity: O(N+M) where N is the number of patterns the client is already subscribed to and M is the number of total patterns subscribed in the system (by any client). ACL categories: `@pubsub`, `@slow`, Unsubscribes the client from the given patterns, or from all of them if none is given. When no patterns are specified, the client is unsubscribed from all the previously subscribed patterns. In this case, a message for every unsubscribed pattern will be sent to the client. Return ------ When successful, this command doesn't return anything. Instead, for each pattern, one message with the first element being the string "punsubscribe" is pushed as a confirmation that the command succeeded. redis CLUSTER CLUSTER ======= ``` CLUSTER FAILOVER ``` Syntax ``` CLUSTER FAILOVER [FORCE | TAKEOVER] ``` Available since: 3.0.0 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, This command, which can only be sent to a Redis Cluster replica node, forces the replica to start a manual failover of its master instance. A manual failover is a special kind of failover that is usually executed when there are no actual failures, but we wish to swap the current master with one of its replicas (which is the node we send the command to), in a safe way, without any window for data loss. It works in the following way: 1. The replica tells the master to stop processing queries from clients. 2. The master replies to the replica with the current *replication offset*. 3. The replica waits for the replication offset to match on its side, to make sure it processed all the data from the master before it continues. 4. The replica starts a failover, obtains a new configuration epoch from the majority of the masters, and broadcasts the new configuration. 5. 
The old master receives the configuration update: it unblocks its clients and starts replying with redirection messages so that they'll continue the conversation with the new master. This way clients are moved away from the old master to the new master atomically and only when the replica that is turning into the new master has processed all of the replication stream from the old master. FORCE option: manual failover when the master is down ----------------------------------------------------- The command behavior can be modified by two options: **FORCE** and **TAKEOVER**. If the **FORCE** option is given, the replica does not perform any handshake with the master, which may not be reachable, but instead starts a failover immediately from step 4. This is useful when we want to start a manual failover while the master is no longer reachable. However, even when using **FORCE**, we still need the majority of masters to be available in order to authorize the failover and generate a new configuration epoch for the replica that is going to become master. TAKEOVER option: manual failover without cluster consensus ---------------------------------------------------------- There are situations where this is not enough, and we want a replica to failover without any agreement with the rest of the cluster. A real-world use case for this is to mass-promote replicas in a different data center to masters in order to perform a data center switch, while all the masters are down or partitioned away. The **TAKEOVER** option implies everything **FORCE** implies, but also does not use any cluster authorization in order to failover. A replica receiving `CLUSTER FAILOVER TAKEOVER` will instead: 1. Generate a new `configEpoch` unilaterally, just taking the current greatest epoch available and incrementing it if its local configuration epoch is not already the greatest. 2. 
Assign itself all the hash slots of its master, and propagate the new configuration to every node which is reachable ASAP, and eventually to every other node. Note that **TAKEOVER violates the last-failover-wins principle** of Redis Cluster, since the configuration epoch generated by the replica violates the normal generation of configuration epochs in several ways: 1. There is no guarantee that it is actually the highest configuration epoch, since, for example, the **TAKEOVER** option can be used within a minority partition, and no message exchange is performed to generate the new configuration epoch. 2. If we generate a configuration epoch which happens to collide with another instance, eventually our configuration epoch, or that of another instance with the same epoch, will be moved away using the *configuration epoch collision resolution algorithm*. Because of this, the **TAKEOVER** option should be used with care. Implementation details and notes -------------------------------- * `CLUSTER FAILOVER`, unless the **TAKEOVER** option is specified, does not execute a failover synchronously. It only *schedules* a manual failover, bypassing the failure detection stage. * An `OK` reply is no guarantee that the failover will succeed. * A replica can only be promoted to a master if it is known as a replica by a majority of the masters in the cluster. If the replica is a new node that has just been added to the cluster (for example after upgrading it), it may not yet be known to all the masters in the cluster. To check that the masters are aware of a new replica, you can send [`CLUSTER NODES`](../cluster-nodes) or [`CLUSTER REPLICAS`](../cluster-replicas) to each of the master nodes and check that it appears as a replica, before sending `CLUSTER FAILOVER` to the replica. 
* To check that the failover has actually happened you can use [`ROLE`](../role), `INFO REPLICATION` (which indicates "role:master" after successful failover), or [`CLUSTER NODES`](../cluster-nodes) to verify that the state of the cluster has changed sometime after the command was sent. * To check if the failover has failed, check the replica's log for "Manual failover timed out", which is logged if the replica has given up after a few seconds. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` if the command was accepted and a manual failover is going to be attempted. An error if the operation cannot be executed, for example if we are talking with a node which is already a master. redis LATENCY LATENCY ======= ``` LATENCY HISTORY ``` Syntax ``` LATENCY HISTORY event ``` Available since: 2.8.13 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, The `LATENCY HISTORY` command returns the raw data of the `event`'s latency spikes time series. This is useful to an application that wants to fetch raw data in order to perform monitoring, display graphs, and so forth. The command will return up to 160 timestamp-latency pairs for the `event`. Valid values for `event` are: * `active-defrag-cycle` * `aof-fsync-always` * `aof-stat` * `aof-rewrite-diff-write` * `aof-rename` * `aof-write` * `aof-write-active-child` * `aof-write-alone` * `aof-write-pending-fsync` * `command` * `expire-cycle` * `eviction-cycle` * `eviction-del` * `fast-command` * `fork` * `rdb-unlink-temp-file` Examples -------- ``` 127.0.0.1:6379> latency history command 1) 1) (integer) 1405067822 2) (integer) 251 2) 1) (integer) 1405067941 2) (integer) 1001 ``` For more information refer to the [Latency Monitoring Framework page](https://redis.io/topics/latency-monitor). 
Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): specifically: The command returns an array where each element is a two-element array representing the timestamp and the latency of the event. redis SSUBSCRIBE SSUBSCRIBE ========== ``` SSUBSCRIBE ``` Syntax ``` SSUBSCRIBE shardchannel [shardchannel ...] ``` Available since: 7.0.0 Time complexity: O(N) where N is the number of shard channels to subscribe to. ACL categories: `@pubsub`, `@slow`, Subscribes the client to the specified shard channels. In a Redis cluster, shard channels are assigned to slots by the same algorithm used to assign keys to slots. Client(s) can subscribe to a node covering a slot (primary/replica) to receive the messages published. All the shard channels specified in a single `SSUBSCRIBE` call must belong to the same slot. A client can subscribe to channels across different slots by issuing separate `SSUBSCRIBE` calls. For more information about sharded Pub/Sub, see [Sharded Pub/Sub](https://redis.io/topics/pubsub#sharded-pubsub). Return ------ When successful, this command doesn't return anything. Instead, for each shard channel, one message with the first element being the string "ssubscribe" is pushed as a confirmation that the command succeeded. Note that this command can also return a -MOVED redirect. Examples -------- ``` > ssubscribe orders Reading messages... (press Ctrl-C to quit) 1) "ssubscribe" 2) "orders" 3) (integer) 1 1) "smessage" 2) "orders" 3) "hello" ``` redis FT.ALIASADD FT.ALIASADD =========== ``` FT.ALIASADD ``` Syntax ``` FT.ALIASADD alias index ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.0.0](https://redis.io/docs/stack/search) Time complexity: O(1) Add an alias to an index [Examples](#examples) Required arguments ------------------ `alias index` is the alias to be added to an index. Indexes can have more than one alias, but an alias cannot refer to another alias. 
FT.ALIASADD allows administrators to transparently redirect application queries to alternative indexes. Return ------ FT.ALIASADD returns a simple string reply `OK` if executed correctly, or an error reply otherwise. Examples -------- **Add an alias to an index** Add an alias to an index. ``` 127.0.0.1:6379> FT.ALIASADD alias idx OK ``` Attempting to add the same alias returns a message that the alias already exists. ``` 127.0.0.1:6379> FT.ALIASADD alias idx (error) Alias already exists ``` See also -------- [`FT.ALIASDEL`](../ft.aliasdel) | [`FT.ALIASUPDATE`](../ft.aliasupdate) Related topics -------------- [RediSearch](https://redis.io/docs/stack/search) redis CLUSTER CLUSTER ======= ``` CLUSTER DELSLOTSRANGE ``` Syntax ``` CLUSTER DELSLOTSRANGE start-slot end-slot [start-slot end-slot ...] ``` Available since: 7.0.0 Time complexity: O(N) where N is the total number of the slots between the start slot and end slot arguments. ACL categories: `@admin`, `@slow`, `@dangerous`, The `CLUSTER DELSLOTSRANGE` command is similar to the [`CLUSTER DELSLOTS`](../cluster-delslots) command in that they both remove hash slots from the node. The difference is that [`CLUSTER DELSLOTS`](../cluster-delslots) takes a list of hash slots to remove from the node, while `CLUSTER DELSLOTSRANGE` takes a list of slot ranges (specified by start and end slots) to remove from the node. Example ------- To remove slots 1 2 3 4 5 from the node, the [`CLUSTER DELSLOTS`](../cluster-delslots) command is: ``` > CLUSTER DELSLOTS 1 2 3 4 5 OK ``` The same operation can be completed with the following `CLUSTER DELSLOTSRANGE` command: ``` > CLUSTER DELSLOTSRANGE 1 5 OK ``` However, note that: 1. The command only works if all the specified slots are already associated with the node. 2. The command fails if the same slot is specified multiple times. 3. As a side effect of the command execution, the node may go into *down* state because not all hash slots are covered. 
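As an illustration of the second restriction above, overlapping ranges mention the shared slot more than once and cause the whole command to fail (slot numbers are illustrative):

```
> CLUSTER DELSLOTSRANGE 0 5 5 10
```

Here slot 5 appears in both ranges, so the command returns an error and no slots are removed.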
Usage in Redis Cluster ---------------------- This command only works in cluster mode and may be useful for debugging and in order to manually orchestrate a cluster configuration when a new cluster is created. It is currently not used by `redis-cli`, and mainly exists for API completeness. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` if the command was successful. Otherwise an error is returned. redis EVALSHA_RO EVALSHA\_RO =========== ``` EVALSHA_RO ``` Syntax ``` EVALSHA_RO sha1 numkeys [key [key ...]] [arg [arg ...]] ``` Available since: 7.0.0 Time complexity: Depends on the script that is executed. ACL categories: `@slow`, `@scripting`, This is a read-only variant of the [`EVALSHA`](../evalsha) command that cannot execute commands that modify data. For more information about when to use this command vs [`EVALSHA`](../evalsha), please refer to [Read-only scripts](https://redis.io/docs/manual/programmability/#read-only_scripts). For more information about [`EVALSHA`](../evalsha) scripts please refer to [Introduction to Eval Scripts](https://redis.io/topics/eval-intro). redis ZREM ZREM ==== ``` ZREM ``` Syntax ``` ZREM key member [member ...] ``` Available since: 1.2.0 Time complexity: O(M\*log(N)) with N being the number of elements in the sorted set and M the number of elements to be removed. ACL categories: `@write`, `@sortedset`, `@fast`, Removes the specified members from the sorted set stored at `key`. Non existing members are ignored. An error is returned when `key` exists and does not hold a sorted set. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), specifically: * The number of members removed from the sorted set, not including non existing members. 
Examples -------- ``` ZADD myzset 1 "one" ZADD myzset 2 "two" ZADD myzset 3 "three" ZREM myzset "two" ZRANGE myzset 0 -1 WITHSCORES ``` History ------- * Starting with Redis version 2.4.0: Accepts multiple elements. redis MULTI MULTI ===== ``` MULTI ``` Syntax ``` MULTI ``` Available since: 1.2.0 Time complexity: O(1) ACL categories: `@fast`, `@transaction`, Marks the start of a [transaction](https://redis.io/topics/transactions) block. Subsequent commands will be queued for atomic execution using [`EXEC`](../exec). Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): always `OK`. redis PUBSUB PUBSUB ====== ``` PUBSUB SHARDCHANNELS ``` Syntax ``` PUBSUB SHARDCHANNELS [pattern] ``` Available since: 7.0.0 Time complexity: O(N) where N is the number of active shard channels, and assuming constant time pattern matching (relatively short shard channels). ACL categories: `@pubsub`, `@slow`, Lists the currently *active shard channels*. An active shard channel is a Pub/Sub shard channel with one or more subscribers. If no `pattern` is specified, all the channels are listed, otherwise if pattern is specified only channels matching the specified glob-style pattern are listed. The information returned about the active shard channels are at the shard level and not at the cluster level. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): a list of active channels, optionally matching the specified pattern. Examples -------- ``` > PUBSUB SHARDCHANNELS 1) "orders" PUBSUB SHARDCHANNELS o* 1) "orders" ``` redis PFSELFTEST PFSELFTEST ========== ``` PFSELFTEST ``` Syntax ``` PFSELFTEST ``` Available since: 2.8.9 Time complexity: N/A ACL categories: `@hyperloglog`, `@admin`, `@slow`, `@dangerous`, The `PFSELFTEST` command is an internal command. It is meant to be used for developing and testing Redis. 
redis SRANDMEMBER SRANDMEMBER =========== ``` SRANDMEMBER ``` Syntax ``` SRANDMEMBER key [count] ``` Available since: 1.0.0 Time complexity: Without the count argument O(1), otherwise O(N) where N is the absolute value of the passed count. ACL categories: `@read`, `@set`, `@slow`, When called with just the `key` argument, return a random element from the set value stored at `key`. If the provided `count` argument is positive, return an array of **distinct elements**. The array's length is either `count` or the set's cardinality ([`SCARD`](../scard)), whichever is lower. If called with a negative `count`, the behavior changes and the command is allowed to return the **same element multiple times**. In this case, the number of returned elements is the absolute value of the specified `count`. Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): without the additional `count` argument, the command returns a Bulk Reply with the randomly selected element, or `nil` when `key` does not exist. [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): when the additional `count` argument is passed, the command returns an array of elements, or an empty array when `key` does not exist. Examples -------- ``` SADD myset one two three SRANDMEMBER myset SRANDMEMBER myset 2 SRANDMEMBER myset -5 ``` Specification of the behavior when count is passed -------------------------------------------------- When the `count` argument is a positive value this command behaves as follows: * No repeated elements are returned. * If `count` is bigger than the set's cardinality, the command will only return the whole set without additional elements. * The order of elements in the reply is not truly random, so it is up to the client to shuffle them if needed. When the `count` is a negative value, the behavior changes as follows: * Repeating elements are possible. 
* Exactly `count` elements, or an empty array if the set is empty (non-existing key), are always returned. * The order of elements in the reply is truly random. Distribution of returned elements --------------------------------- Note: this section is relevant only for Redis 5 or below, as Redis 6 implements a fairer algorithm. The distribution of the returned elements is far from perfect when the number of elements in the set is small; this is because we use an approximated random element function that does not really guarantee a good distribution. The algorithm used, which is implemented inside dict.c, samples the hash table buckets to find a non-empty one. Once a non-empty bucket is found, since we use chaining in our hash table implementation, the number of elements inside the bucket is checked and a random element is selected. This means that if you have two non-empty buckets in the entire hash table, and one has three elements while one has just one, the element that is alone in its bucket will be returned with much higher probability. History ------- * Starting with Redis version 2.6.0: Added the optional `count` argument. redis GETSET GETSET ====== ``` GETSET (deprecated) ``` As of Redis version 6.2.0, this command is regarded as deprecated. It can be replaced by [`SET`](../set) with the `GET` argument when migrating or writing new code. Syntax ``` GETSET key value ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@write`, `@string`, `@fast`, Atomically sets `key` to `value` and returns the old value stored at `key`. Returns an error when `key` exists but does not hold a string value. Any previous time to live associated with the key is discarded on successful [`SET`](../set) operation. Design pattern -------------- `GETSET` can be used together with [`INCR`](../incr) for counting with atomic reset. 
For example: a process may call [`INCR`](../incr) against the key `mycounter` every time some event occurs, but from time to time we need to get the value of the counter and reset it to zero atomically. This can be done using `GETSET mycounter "0"`: ``` INCR mycounter GETSET mycounter "0" GET mycounter ``` Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): the old value stored at `key`, or `nil` when `key` did not exist. Examples -------- ``` SET mykey "Hello" GETSET mykey "World" GET mykey ``` redis PUBSUB PUBSUB ====== ``` PUBSUB NUMPAT ``` Syntax ``` PUBSUB NUMPAT ``` Available since: 2.8.0 Time complexity: O(1) ACL categories: `@pubsub`, `@slow`, Returns the number of unique patterns that are subscribed to by clients (subscriptions created using the [`PSUBSCRIBE`](../psubscribe) command). Note that this isn't the count of clients subscribed to patterns, but the total number of unique patterns all the clients are subscribed to. Cluster note: in a Redis Cluster clients can subscribe to every node, and can also publish to every other node. The cluster will make sure that published messages are forwarded as needed. That said, [`PUBSUB`](../pubsub)'s replies in a cluster only report information from the node's Pub/Sub context, rather than the entire cluster. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of patterns all the clients are subscribed to.
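As a sketch, if one client has issued `PSUBSCRIBE news.*` and another has issued `PSUBSCRIBE news.* weather.*` (pattern names are illustrative), the node is tracking two unique patterns:

```
> PUBSUB NUMPAT
(integer) 2
```

`news.*` is counted once even though two clients are subscribed to it.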
redis JSON.ARRINSERT JSON.ARRINSERT ============== ``` JSON.ARRINSERT ``` Syntax ``` JSON.ARRINSERT key path index value [value ...] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [JSON 1.0.0](https://redis.io/docs/stack/json) Time complexity: O(N) when path is evaluated to a single value where N is the size of the array, O(N) when path is evaluated to multiple values, where N is the size of the key Insert the `json` values into the array at `path` before the `index` (shifts to the right) [Examples](#examples) Required arguments ------------------ `key` is key to modify. `value` is one or more values to insert in one or more arrays. About using strings with JSON commands To specify a string as an array value to insert, wrap the quoted string with an additional set of single quotes. Example: `'"silver"'`. For more detailed use, see [Examples](#examples). `index` is position in the array where you want to insert a value. The index must be in the array's range. Inserting at `index` 0 prepends to the array. Negative index values start from the end of the array. Optional arguments ------------------ `path` is JSONPath to specify. Default is root `$`. Return value ------------ `JSON.ARRINSERT` returns an [array](https://redis.io/docs/reference/protocol-spec/#resp-arrays) of integer replies for each path, the array's new size, or `nil`, if the matching JSON value is not an array. For more information about replies, see [Redis serialization protocol specification](https://redis.io/docs/reference/protocol-spec). Examples -------- **Add new colors to a specific place in a list of product colors** Create a document for noise-cancelling headphones in black and silver colors. 
``` 127.0.0.1:6379> JSON.SET item:1 $ '{"name":"Noise-cancelling Bluetooth headphones","description":"Wireless Bluetooth headphones with noise-cancelling technology","connection":{"wireless":true,"type":"Bluetooth"},"price":99.98,"stock":25,"colors":["black","silver"]}' OK ``` Add color `blue` to the end of the `colors` array. `JSON.ARRAPPEND` returns the array's new size. ``` 127.0.0.1:6379> JSON.ARRAPPEND item:1 $.colors '"blue"' 1) (integer) 3 ``` Get the whole document to see the updated `colors` array. ``` JSON.GET item:1 "{\"name\":\"Noise-cancelling Bluetooth headphones\",\"description\":\"Wireless Bluetooth headphones with noise-cancelling technology\",\"connection\":{\"wireless\":true,\"type\":\"Bluetooth\"},\"price\":99.98,\"stock\":25,\"colors\":[\"black\",\"silver\",\"blue\"]}" ``` Get the list of colors for the product. ``` 127.0.0.1:6379> JSON.GET item:1 '$.colors[*]' "[\"black\",\"silver\",\"blue\"]" ``` Insert two more colors after the second color. You now have five colors. ``` 127.0.0.1:6379> JSON.ARRINSERT item:1 $.colors 2 '"yellow"' '"gold"' 1) (integer) 5 ``` Get the updated list of colors. ``` 127.0.0.1:6379> JSON.GET item:1 $.colors "[[\"black\",\"silver\",\"yellow\",\"gold\",\"blue\"]]" ``` See also -------- [`JSON.ARRAPPEND`](../json.arrappend) | [`JSON.ARRINDEX`](../json.arrindex) Related topics -------------- * [RedisJSON](https://redis.io/docs/stack/json) * [Index and search JSON documents](https://redis.io/docs/stack/search/indexing_json) redis RESET RESET ===== ``` RESET ``` Syntax ``` RESET ``` Available since: 6.2.0 Time complexity: O(1) ACL categories: `@fast`, `@connection`, This command performs a full reset of the connection's server-side context, mimicking the effect of disconnecting and reconnecting again. When the command is called from a regular client connection, it does the following: * Discards the current [`MULTI`](../multi) transaction block, if one exists. * Unwatches all keys [`WATCH`](../watch)ed by the connection. 
* Disables [`CLIENT TRACKING`](../client-tracking), if in use. * Sets the connection to [`READWRITE`](../readwrite) mode. * Cancels the connection's [`ASKING`](../asking) mode, if previously set. * Sets [`CLIENT REPLY`](../client-reply) to `ON`. * Sets the protocol version to RESP2. * [`SELECT`](../select)s database 0. * Exits [`MONITOR`](../monitor) mode, when applicable. * Aborts Pub/Sub's subscription state ([`SUBSCRIBE`](../subscribe) and [`PSUBSCRIBE`](../psubscribe)), when appropriate. * Deauthenticates the connection, requiring a call to [`AUTH`](../auth) to reauthenticate when authentication is enabled. * Turns off `NO-EVICT` mode. * Turns off `NO-TOUCH` mode. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): always 'RESET'. redis JSON.RESP JSON.RESP ========= ``` JSON.RESP ``` Syntax ``` JSON.RESP key [path] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [JSON 1.0.0](https://redis.io/docs/stack/json) Time complexity: O(N) when path is evaluated to a single value, where N is the size of the value, O(N) when path is evaluated to multiple values, where N is the size of the key Return the JSON in `key` in [Redis serialization protocol specification](https://redis.io/docs/reference/protocol-spec) form [Examples](#examples) Required arguments ------------------ `key` is key to parse. Optional arguments ------------------ `path` is JSONPath to specify. Default is root `$`. This command uses the following mapping from JSON to RESP: * JSON `null` maps to the bulk string reply. * JSON `false` and `true` values map to the simple string reply. * JSON number maps to the integer reply or bulk string reply, depending on type. * JSON string maps to the bulk string reply. * JSON array is represented as an array reply in which the first element is the simple string reply `[`, followed by the array's elements. 
* JSON object is represented as an array reply in which the first element is the simple string reply `{`. Each successive entry represents a key-value pair as a two-entry array reply of the bulk string reply. For more information about replies, see [Redis serialization protocol specification](https://redis.io/docs/reference/protocol-spec). Return ------ JSON.RESP returns an array reply specified as the JSON's RESP form detailed in [Redis serialization protocol specification](https://redis.io/docs/reference/protocol-spec). Examples -------- **Return an array of RESP details about a document** Create a JSON document. ``` 127.0.0.1:6379> JSON.SET item:2 $ '{"name":"Wireless earbuds","description":"Wireless Bluetooth in-ear headphones","connection":{"wireless":true,"type":"Bluetooth"},"price":64.99,"stock":17,"colors":["black","white"], "max_level":[80, 100, 120]}' OK ``` Get all RESP details about the document. ``` 127.0.0.1:6379> JSON.RESP item:2 1) { 2) "name" 3) "Wireless earbuds" 4) "description" 5) "Wireless Bluetooth in-ear headphones" 6) "connection" 7) 1) { 2) "wireless" 3) true 4) "type" 5) "Bluetooth" 8) "price" 9) "64.989999999999995" 10) "stock" 11) (integer) 17 12) "colors" 13) 1) [ 2) "black" 3) "white" 14) "max_level" 15) 1) [ 2) (integer) 80 3) (integer) 100 4) (integer) 120 ``` See also -------- [`JSON.SET`](../json.set) | [`JSON.ARRLEN`](../json.arrlen) Related topics -------------- * [RedisJSON](https://redis.io/docs/stack/json) * [Index and search JSON documents](https://redis.io/docs/stack/search/indexing_json) redis HSET HSET ==== ``` HSET ``` Syntax ``` HSET key field value [field value ...] ``` Available since: 2.0.0 Time complexity: O(1) for each field/value pair added, so O(N) to add N field/value pairs when the command is called with multiple field/value pairs. ACL categories: `@write`, `@hash`, `@fast`, Sets the specified fields to their respective values in the hash stored at `key`. 
This command overwrites the values of specified fields that exist in the hash. If `key` doesn't exist, a new key holding a hash is created. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): The number of fields that were added. Examples -------- ``` HSET myhash field1 "Hello" HGET myhash field1 HSET myhash field2 "Hi" field3 "World" HGET myhash field2 HGET myhash field3 HGETALL myhash ``` History ------- * Starting with Redis version 4.0.0: Accepts multiple `field` and `value` arguments. redis OBJECT OBJECT ====== ``` OBJECT FREQ ``` Syntax ``` OBJECT FREQ key ``` Available since: 4.0.0 Time complexity: O(1) ACL categories: `@keyspace`, `@read`, `@slow`, This command returns the logarithmic access frequency counter of a Redis object stored at `<key>`. The command is only available when the `maxmemory-policy` configuration directive is set to one of the LFU policies. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) The counter's value. redis APPEND APPEND ====== ``` APPEND ``` Syntax ``` APPEND key value ``` Available since: 2.0.0 Time complexity: O(1). The amortized time complexity is O(1) assuming the appended value is small and the already present value is of any size, since the dynamic string library used by Redis will double the free space available on every reallocation. ACL categories: `@write`, `@string`, `@fast`, If `key` already exists and is a string, this command appends the `value` at the end of the string. If `key` does not exist it is created and set as an empty string, so `APPEND` will be similar to [`SET`](../set) in this special case. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the length of the string after the append operation. 
Examples -------- ``` EXISTS mykey APPEND mykey "Hello" APPEND mykey " World" GET mykey ``` Pattern: Time series -------------------- The `APPEND` command can be used to create a very compact representation of a list of fixed-size samples, usually referred to as *time series*. Every time a new sample arrives we can store it using the command ``` APPEND timeseries "fixed-size sample" ``` Accessing individual elements in the time series is not hard: * [`STRLEN`](../strlen) can be used in order to obtain the number of samples. * [`GETRANGE`](../getrange) allows for random access of elements. If our time series has associated time information, we can easily implement a binary search to get a range by combining [`GETRANGE`](../getrange) with the Lua scripting engine available in Redis 2.6. * [`SETRANGE`](../setrange) can be used to overwrite an existing time series. The limitation of this pattern is that we are forced into an append-only mode of operation: there is no way to cut the time series to a given size easily, because Redis currently lacks a command able to trim string objects. However the space efficiency of time series stored in this way is remarkable. Hint: it is possible to switch to a different key based on the current Unix time; in this way it is possible to have just a relatively small amount of samples per key, to avoid dealing with very big keys, and to make this pattern easier to distribute across many Redis instances. An example sampling the temperature of a sensor using fixed-size strings (using a binary format is better in real implementations). ``` APPEND ts "0043" APPEND ts "0035" GETRANGE ts 0 3 GETRANGE ts 4 7 ``` redis ZADD ZADD ==== ``` ZADD ``` Syntax ``` ZADD key [NX | XX] [GT | LT] [CH] [INCR] score member [score member ...] ``` Available since: 1.2.0 Time complexity: O(log(N)) for each item added, where N is the number of elements in the sorted set. 
ACL categories: `@write`, `@sortedset`, `@fast`, Adds all the specified members with the specified scores to the sorted set stored at `key`. It is possible to specify multiple score / member pairs. If a specified member is already a member of the sorted set, the score is updated and the element reinserted at the right position to ensure the correct ordering. If `key` does not exist, a new sorted set with the specified members as sole members is created, as if the sorted set was empty. If the key exists but does not hold a sorted set, an error is returned. The score values should be the string representation of a double precision floating point number. `+inf` and `-inf` values are valid values as well. ZADD options ------------ ZADD supports a list of options, specified after the name of the key and before the first score argument. Options are: * **XX**: Only update elements that already exist. Don't add new elements. * **NX**: Only add new elements. Don't update already existing elements. * **LT**: Only update existing elements if the new score is **less than** the current score. This flag doesn't prevent adding new elements. * **GT**: Only update existing elements if the new score is **greater than** the current score. This flag doesn't prevent adding new elements. * **CH**: Modify the return value from the number of new elements added, to the total number of elements changed (CH is an abbreviation of *changed*). Changed elements are **new elements added** and elements already existing for which **the score was updated**. So elements specified in the command line having the same score as they had in the past are not counted. Note: normally the return value of `ZADD` only counts the number of new elements added. * **INCR**: When this option is specified `ZADD` acts like [`ZINCRBY`](../zincrby). Only one score-element pair can be specified in this mode. Note: The **GT**, **LT** and **NX** options are mutually exclusive. 
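A short sketch of how the `GT` and `CH` flags interact (key and member names are illustrative):

```
ZADD myzset 5 "member"
ZADD myzset GT CH 3 "member"
ZADD myzset GT CH 8 "member"
ZSCORE myzset "member"
```

The second `ZADD` returns 0 and leaves the score at 5, because 3 is not greater than the current score; the third returns 1 (reported thanks to `CH`), and the final `ZSCORE` is 8.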
Range of integer scores that can be expressed precisely ------------------------------------------------------- Redis sorted sets use a *double 64-bit floating point number* to represent the score. In all the architectures we support, this is represented as an **IEEE 754 floating point number**, which is able to precisely represent integer numbers between `-(2^53)` and `+(2^53)` included. In more practical terms, all the integers between -9007199254740992 and 9007199254740992 are perfectly representable. Larger integers, or fractions, are internally represented in exponential form, so it is possible that you get only an approximation of the decimal number, or of the very big integer, that you set as score. Sorted sets 101 --------------- Sorted sets are sorted by their score in an ascending way. The same element only exists a single time; no repeated elements are permitted. The score can be modified both by `ZADD`, which updates the element score and, as a side effect, its position in the sorted set, and by [`ZINCRBY`](../zincrby), which updates the score relative to its previous value. The current score of an element can be retrieved using the [`ZSCORE`](../zscore) command, which can also be used to verify whether an element already exists. For an introduction to sorted sets, see the data types page on [sorted sets](https://redis.io/topics/data-types#sorted-sets). Elements with the same score ---------------------------- While the same element can't be repeated in a sorted set since every element is unique, it is possible to add multiple different elements *having the same score*. When multiple elements have the same score, they are *ordered lexicographically* (they are still ordered by score as a first key; however, locally, all the elements with the same score are relatively ordered lexicographically). The lexicographic ordering used is binary: it compares strings as arrays of bytes.
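This ordering can be mirrored in Python by sorting (score, member) tuples with byte-string members: Python compares tuples field by field and `bytes` byte by byte, just like the sorted set (a sketch, not Redis internals):

```python
# Members with equal scores order byte-wise, so b"Cherry" (0x43...) sorts
# before b"apple" (0x61...), exactly as a binary lexicographic comparison would.
pairs = [(0, b"banana"), (0, b"apple"), (0, b"Cherry"), (1, b"aaa")]
ordered = sorted(pairs)  # score first, then binary comparison of the member
print([m.decode() for _, m in ordered])  # ['Cherry', 'apple', 'banana', 'aaa']
```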
If the user inserts all the elements in a sorted set with the same score (for example 0), all the elements of the sorted set are sorted lexicographically, and range queries on elements are possible using the command [`ZRANGEBYLEX`](../zrangebylex) (Note: it is also possible to query sorted sets by range of scores using [`ZRANGEBYSCORE`](../zrangebyscore)). Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), specifically: * When used without optional arguments, the number of elements added to the sorted set (excluding score updates). * If the `CH` option is specified, the number of elements that were changed (added or updated). If the `INCR` option is specified, the return value will be [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): * The new score of `member` (a double precision floating point number) represented as string, or `nil` if the operation was aborted (when called with either the `XX` or the `NX` option). Examples -------- ``` ZADD myzset 1 "one" ZADD myzset 1 "uno" ZADD myzset 2 "two" 3 "three" ZRANGE myzset 0 -1 WITHSCORES ``` History ------- * Starting with Redis version 2.4.0: Accepts multiple elements. * Starting with Redis version 3.0.2: Added the `XX`, `NX`, `CH` and `INCR` options. * Starting with Redis version 6.2.0: Added the `GT` and `LT` options. redis FT.SUGDEL FT.SUGDEL ========= ``` FT.SUGDEL ``` Syntax ``` FT.SUGDEL key string ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.0.0](https://redis.io/docs/stack/search) Time complexity: O(1) Delete a string from a suggestion index [Examples](#examples) Required arguments ------------------ `key` is the suggestion dictionary key. `string` is the suggestion string to delete. Return ------ FT.SUGDEL returns an integer reply: 1 if the string was found and deleted, 0 otherwise.
Examples -------- **Delete a string from a suggestion index** ``` 127.0.0.1:6379> FT.SUGDEL sug "hello" (integer) 1 127.0.0.1:6379> FT.SUGDEL sug "hello" (integer) 0 ``` See also -------- [`FT.SUGGET`](../ft.sugget) | [`FT.SUGADD`](../ft.sugadd) | [`FT.SUGLEN`](../ft.suglen) Related topics -------------- [RediSearch](https://redis.io/docs/stack/search) redis ZRANGEBYSCORE ZRANGEBYSCORE ============= ``` ZRANGEBYSCORE (deprecated) ``` As of Redis version 6.2.0, this command is regarded as deprecated. It can be replaced by [`ZRANGE`](../zrange) with the `BYSCORE` argument when migrating or writing new code. Syntax ``` ZRANGEBYSCORE key min max [WITHSCORES] [LIMIT offset count] ``` Available since: 1.0.5 Time complexity: O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with LIMIT), you can consider it O(log(N)). ACL categories: `@read`, `@sortedset`, `@slow`, Returns all the elements in the sorted set at `key` with a score between `min` and `max` (including elements with score equal to `min` or `max`). The elements are considered to be ordered from low to high scores. The elements having the same score are returned in lexicographical order (this follows from a property of the sorted set implementation in Redis and does not involve further computation). The optional `LIMIT` argument can be used to only get a range of the matching elements (similar to *SELECT LIMIT offset, count* in SQL). A negative `count` returns all elements from the `offset`. Keep in mind that if `offset` is large, the sorted set needs to be traversed for `offset` elements before getting to the elements to return, which can add up to O(N) time complexity. The optional `WITHSCORES` argument makes the command return both the element and its score, instead of the element alone. This option is available since Redis 2.0. 
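The score filter and the `LIMIT` / `WITHSCORES` semantics described above can be modeled in Python over a score-sorted list (a simplified sketch of the documented behavior, not the actual skiplist implementation):

```python
def zrangebyscore(pairs, lo, hi, offset=0, count=-1, withscores=False):
    """Model of ZRANGEBYSCORE over a score-sorted list of (score, member)."""
    hits = [(s, m) for s, m in pairs if lo <= s <= hi]   # inclusive score filter
    # LIMIT offset count; a negative count returns everything from the offset
    hits = hits[offset:] if count < 0 else hits[offset:offset + count]
    return hits if withscores else [m for _, m in hits]

data = [(1, "one"), (2, "two"), (3, "three")]
print(zrangebyscore(data, 1, 3, offset=1, count=2))   # ['two', 'three']
print(zrangebyscore(data, 1, 3, offset=1, count=-1))  # ['two', 'three']
```

The slice after the filter also shows why a large `offset` costs O(N): the matching prefix still has to be walked before the first returned element.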
Exclusive intervals and infinity -------------------------------- `min` and `max` can be `-inf` and `+inf`, so that you are not required to know the highest or lowest score in the sorted set to get all elements from or up to a certain score. By default, the interval specified by `min` and `max` is closed (inclusive). It is possible to specify an open interval (exclusive) by prefixing the score with the character `(`. For example: ``` ZRANGEBYSCORE zset (1 5 ``` Will return all elements with `1 < score <= 5` while: ``` ZRANGEBYSCORE zset (5 (10 ``` Will return all the elements with `5 < score < 10` (5 and 10 excluded). Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list of elements in the specified score range (optionally with their scores). Examples -------- ``` ZADD myzset 1 "one" ZADD myzset 2 "two" ZADD myzset 3 "three" ZRANGEBYSCORE myzset -inf +inf ZRANGEBYSCORE myzset 1 2 ZRANGEBYSCORE myzset (1 2 ZRANGEBYSCORE myzset (1 (2 ``` Pattern: weighted random selection of an element ------------------------------------------------ Normally `ZRANGEBYSCORE` is simply used to get a range of items where the score is the indexed integer key, however it is possible to do less obvious things with the command. For example, a common problem when implementing Markov chains and other algorithms is to select an element at random from a set, but different elements may have different weights that change how likely they are to be picked. Here is how we use this command to implement such an algorithm: Imagine you have elements A, B and C with weights 1, 2 and 3. You compute the sum of the weights, which is 1+2+3 = 6. At this point you add all the elements into a sorted set using this algorithm: ``` SUM = ELEMENTS.TOTAL_WEIGHT // 6 in this case.
SCORE = 0 FOREACH ELE in ELEMENTS SCORE += ELE.weight / SUM ZADD KEY SCORE ELE END ``` This means that you set: ``` A to score 0.16 B to score 0.5 C to score 1 ``` Since this involves approximations, to avoid C ending up with a score like 0.998 instead of 1, we just modify the above algorithm to make sure the last score is exactly 1 (left as an exercise for the reader...). At this point, each time you want to get a weighted random element, just compute a random number between 0 and 1 (which is like calling `rand()` in most languages), so you can just do: ``` RANDOM_ELE = ZRANGEBYSCORE key RAND() +inf LIMIT 0 1 ``` History ------- * Starting with Redis version 2.0.0: Added the `WITHSCORES` modifier.
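The whole recipe can be modeled end to end in pure Python: a list of (score, element) pairs stands in for the sorted set, and `pick` mimics `ZRANGEBYSCORE key RAND() +inf LIMIT 0 1` (names are illustrative, not part of Redis):

```python
import random

def build_weighted_zset(weights):
    """Assign each element its cumulative weight fraction, as in the recipe."""
    total = sum(weights.values())
    zset, acc = [], 0.0
    for ele, w in weights.items():
        acc += w / total
        zset.append((acc, ele))
    zset[-1] = (1.0, zset[-1][1])   # pin the last score to exactly 1
    return zset                     # score-sorted (score, element) pairs

def pick(zset):
    """First element whose score >= a uniform random number in [0, 1)."""
    r = random.random()
    for score, ele in zset:
        if score >= r:
            return ele

zset = build_weighted_zset({"A": 1, "B": 2, "C": 3})
counts = {"A": 0, "B": 0, "C": 0}
for _ in range(60000):
    counts[pick(zset)] += 1
print(counts)   # roughly in a 1:2:3 ratio, e.g. ~10000 / ~20000 / ~30000
```

Pinning the last score to 1 is the "exercise for the reader" above: it guarantees `pick` always finds a match despite floating point rounding.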
redis ACL ACL === ``` ACL LOAD ``` Syntax ``` ACL LOAD ``` Available since: 6.0.0 Time complexity: O(N). Where N is the number of configured users. ACL categories: `@admin`, `@slow`, `@dangerous`, When Redis is configured to use an ACL file (with the `aclfile` configuration option), this command will reload the ACLs from the file, replacing all the current ACL rules with the ones defined in the file. The command makes sure to have an *all or nothing* behavior, that is: * If every line in the file is valid, all the ACLs are loaded. * If one or more lines in the file are not valid, nothing is loaded, and the old ACL rules defined in the server memory continue to be used. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` on success. The command may fail with an error for several reasons: if the file is not readable, or if there is an error inside the file; in such cases the error will be reported to the user. Finally, the command will fail if the server is not configured to use an external ACL file. Examples -------- ``` > ACL LOAD +OK > ACL LOAD -ERR /tmp/foo:1: Unknown command or category name in ACL... ``` redis CLUSTER CLUSTER ======= ``` CLUSTER MEET ``` Syntax ``` CLUSTER MEET ip port [cluster-bus-port] ``` Available since: 3.0.0 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, `CLUSTER MEET` is used in order to connect different Redis nodes with cluster support enabled, into a working cluster. The basic idea is that nodes by default don't trust each other, and are considered unknown, so that it is unlikely that different cluster nodes will mix into a single one because of system administration errors or network address modifications. So in order for a given node to accept another one into the list of nodes composing a Redis Cluster, there are only two ways: 1. The system administrator sends a `CLUSTER MEET` command to force a node to meet another one. 2.
An already known node sends a list of nodes in the gossip section that we are not aware of. If the receiving node trusts the sending node as a known node, it will process the gossip section and send a handshake to the nodes that are still not known. Note that Redis Cluster needs to form a full mesh (each node is connected with each other node), but in order to create a cluster, there is no need to send all the `CLUSTER MEET` commands needed to form the full mesh. What matters is sending enough `CLUSTER MEET` messages so that each node can reach each other node through a *chain of known nodes*. Thanks to the exchange of gossip information in heartbeat packets, the missing links will be created. So, if we link node A with node B via `CLUSTER MEET`, and B with C, A and C will find their way to handshake and create a link. Another example: if we imagine a cluster formed of the following four nodes called A, B, C and D, we may send just the following set of commands to A: 1. `CLUSTER MEET B-ip B-port` 2. `CLUSTER MEET C-ip C-port` 3. `CLUSTER MEET D-ip D-port` As a side effect of `A` knowing and being known by all the other nodes, it will send gossip sections in the heartbeat packets that will allow each other node to create a link with each other one, forming a full mesh in a matter of seconds, even if the cluster is large. Moreover `CLUSTER MEET` does not need to be reciprocal. If I send the command to A in order to join B, I don't need to also send it to B in order to join A. If the optional `cluster_bus_port` argument is not provided, the default of port + 10000 will be used. Implementation details: MEET and PING packets --------------------------------------------- When a given node receives a `CLUSTER MEET` message, the node specified in the command still does not know the node we sent the command to. So in order for the node to force the receiver to accept it as a trusted node, it sends a `MEET` packet instead of a [`PING`](../ping) packet.
The two packets have exactly the same format, but the former forces the receiver to acknowledge the node as trusted. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` if the command was successful. If the address or port specified are invalid an error is returned. History ------- * Starting with Redis version 4.0.0: Added the optional `cluster_bus_port` argument. redis TS.INCRBY TS.INCRBY ========= ``` TS.INCRBY ``` Syntax ``` TS.INCRBY key value [TIMESTAMP timestamp] [RETENTION retentionPeriod] [UNCOMPRESSED] [CHUNK_SIZE size] [LABELS {label value}...] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [TimeSeries 1.0.0](https://redis.io/docs/stack/timeseries) Time complexity: O(M) when M is the amount of compaction rules or O(1) with no compaction Increase the value of the sample with the maximum existing timestamp, or create a new sample with a value equal to the value of the sample with the maximum existing timestamp with a given increment [Examples](#examples) Required arguments ------------------ `key` is key name for the time series. `value` is numeric data value of the sample (double) **Notes** * When specified key does not exist, a new time series is created. * You can use this command as a counter or gauge that automatically gets history as a time series. * Explicitly adding samples to a compacted time series (using [`TS.ADD`](../ts.add), [`TS.MADD`](../ts.madd), `TS.INCRBY`, or [`TS.DECRBY`](../ts.decrby)) may result in inconsistencies between the raw and the compacted data. The compaction process may override such samples. Optional arguments ------------------ `TIMESTAMP timestamp` is (integer) UNIX sample timestamp in milliseconds or `*` to set the timestamp according to the server clock. `timestamp` must be equal to or higher than the maximum existing timestamp. When equal, the value of the sample with the maximum existing timestamp is increased. 
If it is higher, a new sample with a timestamp set to `timestamp` is created, and its value is set to the value of the sample with the maximum existing timestamp plus `value`. If the time series is empty, the value is set to `value`. When not specified, the timestamp is set according to the server clock. `RETENTION retentionPeriod` is maximum retention period, compared to the maximum existing timestamp, in milliseconds. Use it only if you are creating a new time series. It is ignored if you are adding samples to an existing time series. See `RETENTION` in [`TS.CREATE`](../ts.create). `UNCOMPRESSED` changes data storage from compressed (default) to uncompressed. Use it only if you are creating a new time series. It is ignored if you are adding samples to an existing time series. See `ENCODING` in [`TS.CREATE`](../ts.create). `CHUNK_SIZE size` is memory size, in bytes, allocated for each data chunk. Use it only if you are creating a new time series. It is ignored if you are adding samples to an existing time series. See `CHUNK_SIZE` in [`TS.CREATE`](../ts.create). `LABELS [{label value}...]` is set of label-value pairs that represent metadata labels of the key and serve as a secondary index. Use it only if you are creating a new time series. It is ignored if you are adding samples to an existing time series. See `LABELS` in [`TS.CREATE`](../ts.create). **Notes** * You can use this command to add data to a nonexisting time series in a single command. This is why `RETENTION`, `UNCOMPRESSED`, `CHUNK_SIZE`, and `LABELS` are optional arguments. * When specified and the key doesn't exist, a new time series is created. Setting the `RETENTION` and `LABELS` introduces additional time complexity. Return value ------------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) - the timestamp of the upserted sample, or [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors). 
Examples -------- **Store sum of data from several sources** Suppose you are getting the number of orders or total income per minute from several points of sale, and you want to store only the combined value. Call TS.INCRBY for each point-of-sale report. ``` 127.0.0.1:6379> TS.INCRBY a 232 TIMESTAMP 1657811829000 // point-of-sale #1 (integer) 1657811829000 127.0.0.1:6379> TS.INCRBY a 157 TIMESTAMP 1657811829000 // point-of-sale #2 (integer) 1657811829000 127.0.0.1:6379> TS.INCRBY a 432 TIMESTAMP 1657811829000 // point-of-sale #3 (integer) 1657811829000 ``` Note that the timestamps must arrive in non-decreasing order. ``` 127.0.0.1:6379> ts.incrby a 100 TIMESTAMP 50 (error) TSDB: timestamp must be equal to or higher than the maximum existing timestamp ``` You can achieve similar results without such protection using `TS.ADD key timestamp value ON_DUPLICATE sum`. **Count sensor captures** Suppose a sensor ticks whenever a car passes on a road, and you want to count occurrences. Whenever you get a tick from the sensor you can simply call: ``` 127.0.0.1:6379> TS.INCRBY a 1 (integer) 1658431553109 ``` The timestamp is filled automatically. See also -------- [`TS.DECRBY`](../ts.decrby) | [`TS.CREATE`](../ts.create) Related topics -------------- [RedisTimeSeries](https://redis.io/docs/stack/timeseries) redis JSON.CLEAR JSON.CLEAR ========== ``` JSON.CLEAR ``` Syntax ``` JSON.CLEAR key [path] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [JSON 2.0.0](https://redis.io/docs/stack/json) Time complexity: O(N) when path is evaluated to a single value where N is the size of the values, O(N) when path is evaluated to multiple values, where N is the size of the key Clear container values (arrays/objects) and set numeric values to `0` [Examples](#examples) Required arguments ------------------ `key` is key to parse. Optional arguments ------------------ `path` is JSONPath to specify. Default is root `$`. Nonexisting paths are ignored.
Return ------ JSON.CLEAR returns an integer reply specified as the number of values cleared. For more information about replies, see [Redis serialization protocol specification](https://redis.io/docs/reference/protocol-spec). Note: values that are already cleared (empty containers and numbers already equal to zero) are ignored and not counted. Examples -------- **Clear container values and set numeric values to `0`** Create a JSON document. ``` 127.0.0.1:6379> JSON.SET doc $ '{"obj":{"a":1, "b":2}, "arr":[1,2,3], "str": "foo", "bool": true, "int": 42, "float": 3.14}' OK ``` Clear all container values. This returns the number of objects with cleared values. ``` 127.0.0.1:6379> JSON.CLEAR doc $.* (integer) 4 ``` Get the updated document. Note that numeric values have been set to `0`. ``` 127.0.0.1:6379> JSON.GET doc $ "[{\"obj\":{},\"arr\":[],\"str\":\"foo\",\"bool\":true,\"int\":0,\"float\":0}]" ``` See also -------- [`JSON.ARRINDEX`](../json.arrindex) | [`JSON.ARRINSERT`](../json.arrinsert) Related topics -------------- * [RedisJSON](https://redis.io/docs/stack/json) * [Index and search JSON documents](https://redis.io/docs/stack/search/indexing_json) redis XGROUP XGROUP ====== ``` XGROUP CREATE ``` Syntax ``` XGROUP CREATE key group <id | $> [MKSTREAM] [ENTRIESREAD entries-read] ``` Available since: 5.0.0 Time complexity: O(1) ACL categories: `@write`, `@stream`, `@slow`, Create a new consumer group uniquely identified by `<groupname>` for the stream stored at `<key>`. Every group has a unique name in a given stream. When a consumer group with the same name already exists, the command returns a `-BUSYGROUP` error. The command's `<id>` argument specifies the last delivered entry in the stream from the new group's perspective. The special ID `$` is the ID of the last entry in the stream, but you can substitute it with any valid ID.
For example, if you want the group's consumers to fetch the entire stream from the beginning, use zero as the starting ID for the consumer group: ``` XGROUP CREATE mystream mygroup 0 ``` By default, the `XGROUP CREATE` command expects that the target stream exists, and returns an error when it doesn't. If a stream does not exist, you can create it automatically with a length of 0 by using the optional `MKSTREAM` subcommand as the last argument after the `<id>`: ``` XGROUP CREATE mystream mygroup $ MKSTREAM ``` To enable consumer group lag tracking, specify the optional `entries_read` named argument with an arbitrary ID. An arbitrary ID is any ID that isn't the ID of the stream's first entry, last entry, or zero ("0-0") ID. Use it to find out how many entries are between the arbitrary ID (excluding it) and the stream's last entry. Set `entries_read` to the stream's `entries_added` minus the number of entries. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` on success. History ------- * Starting with Redis version 7.0.0: Added the `entries_read` named argument. redis ZCARD ZCARD ===== ``` ZCARD ``` Syntax ``` ZCARD key ``` Available since: 1.2.0 Time complexity: O(1) ACL categories: `@read`, `@sortedset`, `@fast`, Returns the sorted set cardinality (number of elements) of the sorted set stored at `key`. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the cardinality (number of elements) of the sorted set, or `0` if `key` does not exist.
Examples -------- ``` ZADD myzset 1 "one" ZADD myzset 2 "two" ZCARD myzset ``` redis FT.DICTDUMP FT.DICTDUMP =========== ``` FT.DICTDUMP ``` Syntax ``` FT.DICTDUMP dict ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.4.0](https://redis.io/docs/stack/search) Time complexity: O(N), where N is the size of the dictionary Dump all terms in the given dictionary [Examples](#examples) Required arguments ------------------ `dict` is the dictionary name. Return ------ FT.DICTDUMP returns an array, where each element is a term (string). Examples -------- **Dump all terms in the dictionary** ``` 127.0.0.1:6379> FT.DICTDUMP dict 1) "foo" 2) "bar" 3) "hello world" ``` See also -------- [`FT.DICTADD`](../ft.dictadd) | [`FT.DICTDEL`](../ft.dictdel) Related topics -------------- [RediSearch](https://redis.io/docs/stack/search) redis SMEMBERS SMEMBERS ======== ``` SMEMBERS ``` Syntax ``` SMEMBERS key ``` Available since: 1.0.0 Time complexity: O(N) where N is the set cardinality. ACL categories: `@read`, `@set`, `@slow`, Returns all the members of the set value stored at `key`. This has the same effect as running [`SINTER`](../sinter) with one argument `key`. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): all elements of the set. Examples -------- ``` SADD myset "Hello" SADD myset "World" SMEMBERS myset ``` redis QUIT QUIT ==== ``` QUIT (deprecated) ``` As of Redis version 7.2.0, this command is regarded as deprecated. It can be replaced by just closing the connection when migrating or writing new code. Syntax ``` QUIT ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@fast`, `@connection`, Ask the server to close the connection. The connection is closed as soon as all pending replies have been written to the client. **Note:** Clients should not use this command. Instead, clients should simply close the connection when they're not used anymore.
Terminating a connection on the client side is preferable, as it eliminates `TIME_WAIT` lingering sockets on the server side. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): always OK. redis CLUSTER CLUSTER ======= ``` CLUSTER SLAVES (deprecated) ``` As of Redis version 5.0.0, this command is regarded as deprecated. It can be replaced by [`CLUSTER REPLICAS`](../cluster-replicas) when migrating or writing new code. Syntax ``` CLUSTER SLAVES node-id ``` Available since: 3.0.0 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, **A note about the word slave used in this man page and command name**: starting with Redis version 5, if not for backward compatibility, the Redis project no longer uses the word slave. Please use the new command [`CLUSTER REPLICAS`](../cluster-replicas). The command `CLUSTER SLAVES` will continue to work for backward compatibility. The command provides a list of replica nodes replicating from the specified master node. The list is provided in the same format used by [`CLUSTER NODES`](../cluster-nodes) (please refer to its documentation for the specification of the format). The command will fail if the specified node is not known or if it is not a master according to the node table of the node receiving the command. Note that if a replica is added, moved, or removed from a given master node, and we send `CLUSTER SLAVES` to a node that has not yet received the configuration update, it may show stale information. However, eventually (in a matter of seconds if there are no network partitions) all the nodes will agree about the set of nodes associated with a given master. Return ------ The command returns data in the same format as [`CLUSTER NODES`](../cluster-nodes). redis CLUSTER CLUSTER ======= ``` CLUSTER MYID ``` Syntax ``` CLUSTER MYID ``` Available since: 3.0.0 Time complexity: O(1) ACL categories: `@slow`, Returns the node's id.
The `CLUSTER MYID` command returns the unique, auto-generated identifier that is associated with the connected cluster node. Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): The node id. redis JSON.OBJLEN JSON.OBJLEN =========== ``` JSON.OBJLEN ``` Syntax ``` JSON.OBJLEN key [path] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [JSON 1.0.0](https://redis.io/docs/stack/json) Time complexity: O(1) when path is evaluated to a single value, O(N) when path is evaluated to multiple values, where N is the size of the key Report the number of keys in the JSON object at `path` in `key` [Examples](#examples) Required arguments ------------------ `key` is key to parse. Returns `null` for nonexistent keys. Optional arguments ------------------ `path` is JSONPath to specify. Default is root `$`. Returns `null` for a nonexistent path. Return ------ JSON.OBJLEN returns an array of integer replies for each path specified as the number of keys in the object or `nil`, if the matching JSON value is not an object. For more information about replies, see [Redis serialization protocol specification](https://redis.io/docs/reference/protocol-spec). Examples -------- ``` 127.0.0.1:6379> JSON.SET doc $ '{"a":[3], "nested": {"a": {"b":2, "c": 1}}}' OK 127.0.0.1:6379> JSON.OBJLEN doc $..a 1) (nil) 2) (integer) 2 ``` See also -------- [`JSON.ARRINDEX`](../json.arrindex) | [`JSON.ARRINSERT`](../json.arrinsert) Related topics -------------- * [RedisJSON](https://redis.io/docs/stack/json) * [Index and search JSON documents](https://redis.io/docs/stack/search/indexing_json) redis ACL ACL === ``` ACL LIST ``` Syntax ``` ACL LIST ``` Available since: 6.0.0 Time complexity: O(N). Where N is the number of configured users. ACL categories: `@admin`, `@slow`, `@dangerous`, The command shows the currently active ACL rules in the Redis server.
Each line in the returned array defines a different user, and the format is the same used in the redis.conf file or the external ACL file, so you can cut and paste what is returned by the ACL LIST command directly inside a configuration file if you wish (but make sure to check [`ACL SAVE`](../acl-save)). Return ------ An array of strings. Examples -------- ``` > ACL LIST 1) "user antirez on #9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 ~objects:* &* +@all -@admin -@dangerous" 2) "user default on nopass ~* &* +@all" ```
redis RANDOMKEY RANDOMKEY ========= ``` RANDOMKEY ``` Syntax ``` RANDOMKEY ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@keyspace`, `@read`, `@slow`, Return a random key from the currently selected database. Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): the random key, or `nil` when the database is empty. redis ZREVRANGEBYSCORE ZREVRANGEBYSCORE ================ ``` ZREVRANGEBYSCORE (deprecated) ``` As of Redis version 6.2.0, this command is regarded as deprecated. It can be replaced by [`ZRANGE`](../zrange) with the `REV` and `BYSCORE` arguments when migrating or writing new code. Syntax ``` ZREVRANGEBYSCORE key max min [WITHSCORES] [LIMIT offset count] ``` Available since: 2.2.0 Time complexity: O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with LIMIT), you can consider it O(log(N)). ACL categories: `@read`, `@sortedset`, `@slow`, Returns all the elements in the sorted set at `key` with a score between `max` and `min` (including elements with score equal to `max` or `min`). Contrary to the default ordering of sorted sets, for this command the elements are considered to be ordered from high to low scores. The elements having the same score are returned in reverse lexicographical order. Apart from the reversed ordering, `ZREVRANGEBYSCORE` is similar to [`ZRANGEBYSCORE`](../zrangebyscore). Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list of elements in the specified score range (optionally with their scores). Examples -------- ``` ZADD myzset 1 "one" ZADD myzset 2 "two" ZADD myzset 3 "three" ZREVRANGEBYSCORE myzset +inf -inf ZREVRANGEBYSCORE myzset 2 1 ZREVRANGEBYSCORE myzset 2 (1 ZREVRANGEBYSCORE myzset (2 (1 ``` History ------- * Starting with Redis version 2.1.6: `min` and `max` can be exclusive.
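The relation to `ZRANGEBYSCORE` can be sketched in Python: the same inclusive score filter, with the result walked from the high end (a simplified model over a score-sorted list; not the actual implementation):

```python
def zrevrangebyscore(pairs, hi, lo):
    """Model of ZREVRANGEBYSCORE key max min over (score, member) pairs."""
    hits = [m for s, m in sorted(pairs) if lo <= s <= hi]  # ascending matches
    return hits[::-1]   # reversed, i.e. what ZRANGE ... REV BYSCORE returns

data = [(1, "one"), (2, "two"), (3, "three")]
print(zrevrangebyscore(data, float("inf"), float("-inf")))  # ['three', 'two', 'one']
print(zrevrangebyscore(data, 2, 1))                         # ['two', 'one']
```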
redis XGROUP XGROUP ====== ``` XGROUP CREATECONSUMER ``` Syntax ``` XGROUP CREATECONSUMER key group consumer ``` Available since: 6.2.0 Time complexity: O(1) ACL categories: `@write`, `@stream`, `@slow`, Create a consumer named `<consumername>` in the consumer group `<groupname>` of the stream that's stored at `<key>`. Consumers are also created automatically whenever an operation, such as [`XREADGROUP`](../xreadgroup), references a consumer that doesn't exist. This is valid for [`XREADGROUP`](../xreadgroup) only when there is data in the stream. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of created consumers (0 or 1) redis CLUSTER CLUSTER ======= ``` CLUSTER FORGET ``` Syntax ``` CLUSTER FORGET node-id ``` Available since: 3.0.0 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, The command is used in order to remove a node, specified via its node ID, from the set of *known nodes* of the Redis Cluster node receiving the command. In other words the specified node is removed from the *nodes table* of the node receiving the command. Because all the other nodes participating in the cluster know about a node once it is part of the cluster, in order for a node to be completely removed from a cluster, the `CLUSTER FORGET` command must be sent to all the remaining nodes, regardless of whether they are masters or replicas. However the command cannot simply drop the node from the internal node table of the node receiving the command; it also implements a ban-list, not allowing the same node to be added again as a side effect of processing the *gossip section* of the heartbeat packets received from other nodes. Details on why the ban-list is needed ------------------------------------- In the following example we'll show why the command must not just remove a given node from the nodes table, but also prevent it from being re-inserted for some time.
Let's assume we have four nodes, A, B, C and D. In order to end up with just a three-node cluster A, B, C we may follow these steps: 1. Reshard all the hash slots from D to nodes A, B, C. 2. D is now empty, but still listed in the nodes table of A, B and C. 3. We contact A, and send `CLUSTER FORGET D`. 4. B sends node A a heartbeat packet, where node D is listed. 5. A no longer knows node D (see step 3), so it starts a handshake with D. 6. D ends up re-added to the nodes table of A. As you can see, removing a node this way is fragile: we would need to send `CLUSTER FORGET` commands to all the nodes ASAP, hoping no gossip sections are processed in the meantime. Because of this problem the command implements a ban-list with an expire time for each entry. So what the command really does is: 1. The specified node gets removed from the nodes table. 2. The node ID of the removed node gets added to the ban-list, for 1 minute. 3. The node will skip all the node IDs listed in the ban-list when processing gossip sections received in heartbeat packets from other nodes. This way we have a 60 second window to inform all the nodes in the cluster that we want to remove a node. Special conditions not allowing the command execution ----------------------------------------------------- The command does not succeed and returns an error in the following cases: 1. The specified node ID is not found in the nodes table. 2. The node receiving the command is a replica, and the specified node ID identifies its current master. 3. The node ID identifies the same node we are sending the command to. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` if the command was executed successfully, otherwise an error is returned. redis DECR DECR ==== ``` DECR ``` Syntax ``` DECR key ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@write`, `@string`, `@fast`, Decrements the number stored at `key` by one.
If the key does not exist, it is set to `0` before performing the operation. An error is returned if the key contains a value of the wrong type or contains a string that cannot be represented as an integer. This operation is limited to **64 bit signed integers**. See [`INCR`](../incr) for extra information on increment/decrement operations. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the value of `key` after the decrement Examples -------- ``` SET mykey "10" DECR mykey SET mykey "234293482390480948029348230948" DECR mykey ``` redis CF.MEXISTS CF.MEXISTS ========== ``` CF.MEXISTS ``` Syntax ``` CF.MEXISTS key item [item ...] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 1.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(k \* n), where k is the number of sub-filters and n is the number of items Check whether one or more `items` exist in a Cuckoo Filter `key` ### Parameters * **key**: The name of the filter * **items**: One or more items to check for Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) of [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) - for each item, where a "1" value means the corresponding item may exist in the filter, and a "0" value means it does not exist in the filter. Examples -------- ``` redis> CF.MEXISTS cf item1 item_new 1) (integer) 1 2) (integer) 0 ``` redis XREVRANGE XREVRANGE ========= ``` XREVRANGE ``` Syntax ``` XREVRANGE key end start [COUNT count] ``` Available since: 5.0.0 Time complexity: O(N) with N being the number of elements returned. If N is constant (e.g. always asking for the first 10 elements with COUNT), you can consider it O(1). 
ACL categories: `@read`, `@stream`, `@slow`, This command is exactly like [`XRANGE`](../xrange), but with the notable difference of returning the entries in reverse order, and also taking the start-end range in reverse order: in `XREVRANGE` you need to state the *end* ID and later the *start* ID, and the command will produce all the elements between (or exactly equal to) the two IDs, starting from the *end* side. So for instance, to get all the elements from the higher ID to the lower ID one could use: ``` XREVRANGE somestream + - ``` Similarly, to get just the last element added to the stream, it is enough to send: ``` XREVRANGE somestream + - COUNT 1 ``` Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays), specifically: The command returns the entries with IDs matching the specified range, from the higher ID to the lower ID matching. The returned entries are complete, meaning that the ID and all the fields they are composed of are returned. Moreover, the entries are returned with their fields and values in the exact same order as [`XADD`](../xadd) added them. Examples -------- ``` XADD writers * name Virginia surname Woolf XADD writers * name Jane surname Austen XADD writers * name Toni surname Morrison XADD writers * name Agatha surname Christie XADD writers * name Ngozi surname Adichie XLEN writers XREVRANGE writers + - COUNT 1 ``` History ------- * Starting with Redis version 6.2.0: Added exclusive ranges. redis BF.MEXISTS BF.MEXISTS ========== ``` BF.MEXISTS ``` Syntax ``` BF.MEXISTS key item [item ...] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 1.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(k \* n), where k is the number of hash functions and n is the number of items Determines whether one or more items may exist in the filter. 
### Parameters * **key**: The name of the filter * **items**: One or more items to check Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) of [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) - for each item, where a "1" value means the corresponding item may exist in the filter, and a "0" value means it does not exist in the filter. Examples -------- ``` redis> BF.MEXISTS bf item1 item_new 1) (integer) 1 2) (integer) 0 ``` redis HEXISTS HEXISTS ======= ``` HEXISTS ``` Syntax ``` HEXISTS key field ``` Available since: 2.0.0 Time complexity: O(1) ACL categories: `@read`, `@hash`, `@fast`, Returns whether `field` is an existing field in the hash stored at `key`. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), specifically: * `1` if the hash contains `field`. * `0` if the hash does not contain `field`, or `key` does not exist. Examples -------- ``` HSET myhash field1 "foo" HEXISTS myhash field1 HEXISTS myhash field2 ``` redis ZRANGESTORE ZRANGESTORE =========== ``` ZRANGESTORE ``` Syntax ``` ZRANGESTORE dst src min max [BYSCORE | BYLEX] [REV] [LIMIT offset count] ``` Available since: 6.2.0 Time complexity: O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements stored into the destination key. ACL categories: `@write`, `@sortedset`, `@slow`, This command is like [`ZRANGE`](../zrange), but stores the result in the `<dst>` destination key. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of elements in the resulting sorted set. Examples -------- ``` ZADD srczset 1 "one" 2 "two" 3 "three" 4 "four" ZRANGESTORE dstzset srczset 2 -1 ZRANGE dstzset 0 -1 ``` redis SETNX SETNX ===== ``` SETNX (deprecated) ``` As of Redis version 2.6.12, this command is regarded as deprecated. It can be replaced by [`SET`](../set) with the `NX` argument when migrating or writing new code. 
Syntax ``` SETNX key value ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@write`, `@string`, `@fast`, Set `key` to hold string `value` if `key` does not exist. In that case, it is equal to [`SET`](../set). When `key` already holds a value, no operation is performed. `SETNX` is short for "**SET** if **N**ot e**X**ists". Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), specifically: * `1` if the key was set * `0` if the key was not set Examples -------- ``` SETNX mykey "Hello" SETNX mykey "World" GET mykey ``` Design pattern: Locking with `SETNX` ------------------------------------ **Please note that:** 1. The following pattern is discouraged in favor of [the Redlock algorithm](https://redis.io/topics/distlock), which is only a bit more complex to implement, but offers better guarantees and is fault tolerant. 2. We document the old pattern anyway because certain existing implementations link to this page as a reference. Moreover, it is an interesting example of how Redis commands can be used to build programming primitives. 3. In any case, even assuming a single-instance locking primitive, starting with 2.6.12 it is possible to create a much simpler locking primitive, equivalent to the one discussed here, using the [`SET`](../set) command to acquire the lock, and a simple Lua script to release the lock. The pattern is documented in the [`SET`](../set) command page. That said, `SETNX` can be used, and was historically used, as a locking primitive. For example, to acquire the lock of the key `foo`, the client could try the following: ``` SETNX lock.foo <current Unix time + lock timeout + 1> ``` If `SETNX` returns `1` the client acquired the lock, setting the `lock.foo` key to the Unix time at which the lock should no longer be considered valid. The client will later use `DEL lock.foo` in order to release the lock. If `SETNX` returns `0` the key is already locked by some other client. 
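This acquire/release cycle can be sketched against an in-memory stand-in rather than a real Redis connection (the `FakeRedis` class and `acquire_lock` helper below are illustrative names for this example, not a client library API):

```python
import time

class FakeRedis:
    """Tiny in-memory stand-in for the string commands used by the pattern."""
    def __init__(self):
        self.store = {}

    def setnx(self, key, value):
        # SET if Not eXists: returns 1 when the key was set, 0 otherwise.
        if key in self.store:
            return 0
        self.store[key] = value
        return 1

    def delete(self, key):
        self.store.pop(key, None)

def acquire_lock(r, key, timeout):
    """Try to acquire the lock once; returns True on success."""
    expiry = int(time.time()) + timeout + 1  # <current Unix time + lock timeout + 1>
    return r.setnx(key, expiry) == 1

r = FakeRedis()
assert acquire_lock(r, "lock.foo", 10) is True   # first client gets the lock
assert acquire_lock(r, "lock.foo", 10) is False  # second client is refused
r.delete("lock.foo")                             # release with DEL
assert acquire_lock(r, "lock.foo", 10) is True
```
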
We can either return to the caller if it's a non-blocking lock, or enter a loop, retrying to acquire the lock until we succeed or some kind of timeout expires. ### Handling deadlocks In the above locking algorithm there is a problem: what happens if a client fails, crashes, or is otherwise not able to release the lock? It's possible to detect this condition because the lock key contains a UNIX timestamp. If such a timestamp is equal to the current Unix time the lock is no longer valid. When this happens we can't just call [`DEL`](../del) against the key to remove the lock and then try to issue a `SETNX`, as there is a race condition here when multiple clients detect an expired lock and try to release it: * C1 and C2 read `lock.foo` to check the timestamp, because they both received `0` after executing `SETNX`, as the lock is still held by C3, which crashed after holding the lock. * C1 sends `DEL lock.foo` * C1 sends `SETNX lock.foo` and it succeeds * C2 sends `DEL lock.foo` * C2 sends `SETNX lock.foo` and it succeeds * **ERROR**: both C1 and C2 acquired the lock because of the race condition. Fortunately, it's possible to avoid this issue using the following algorithm. Let's see how C4, our sane client, uses the good algorithm: * C4 sends `SETNX lock.foo` in order to acquire the lock * The crashed client C3 still holds it, so Redis will reply with `0` to C4. * C4 sends `GET lock.foo` to check if the lock expired. If it has not, it will sleep for some time and retry from the start. * Instead, if the lock is expired because the Unix time at `lock.foo` is older than the current Unix time, C4 tries to perform: ``` GETSET lock.foo <current Unix timestamp + lock timeout + 1> ``` * Because of the [`GETSET`](../getset) semantics, C4 can check if the old value stored at `key` is still an expired timestamp. If it is, the lock was acquired. 
* If another client, for instance C5, was faster than C4 and acquired the lock with the [`GETSET`](../getset) operation, the C4 [`GETSET`](../getset) operation will return a non-expired timestamp. C4 will simply restart from the first step. Note that even if C4 set the key a few seconds in the future this is not a problem. In order to make this locking algorithm more robust, a client holding a lock should always check that the timeout hasn't expired before unlocking the key with [`DEL`](../del), because client failures can be complex: not just crashing, but also blocking for a long time on some operation and issuing [`DEL`](../del) much later (when the lock is already held by another client). redis GET GET === ``` GET ``` Syntax ``` GET key ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@read`, `@string`, `@fast`, Get the value of `key`. If the key does not exist the special value `nil` is returned. An error is returned if the value stored at `key` is not a string, because `GET` only handles string values. Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): the value of `key`, or `nil` when `key` does not exist. Examples -------- ``` GET nonexisting SET mykey "Hello" GET mykey ``` redis TS.RANGE TS.RANGE ======== ``` TS.RANGE ``` Syntax ``` TS.RANGE key fromTimestamp toTimestamp [LATEST] [FILTER_BY_TS ts...] [FILTER_BY_VALUE min max] [COUNT count] [[ALIGN align] AGGREGATION aggregator bucketDuration [BUCKETTIMESTAMP bt] [EMPTY]] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [TimeSeries 1.0.0](https://redis.io/docs/stack/timeseries) Time complexity: O(n/m+k) where n = Number of data points, m = Chunk size (data points per chunk), k = Number of data points that are in the requested range Query a range in forward direction [Examples](#examples) Required arguments ------------------ `key` is the key name for the time series. 
`fromTimestamp` is the start timestamp for the range query (integer UNIX timestamp in milliseconds) or `-` to denote the timestamp of the earliest sample in the time series. `toTimestamp` is the end timestamp for the range query (integer UNIX timestamp in milliseconds) or `+` to denote the timestamp of the latest sample in the time series. **Note:** When the time series is a compaction, the last compacted value may aggregate raw values with timestamps beyond `toTimestamp`. That is because `toTimestamp` only limits the timestamp of the compacted value, which is the start time of the raw bucket that was compacted. Optional arguments ------------------ `LATEST` (since RedisTimeSeries v1.8) is used when a time series is a compaction. With `LATEST`, TS.RANGE also reports the compacted value of the latest, possibly partial, bucket, given that this bucket's start time falls within `[fromTimestamp, toTimestamp]`. Without `LATEST`, TS.RANGE does not report the latest, possibly partial, bucket. When a time series is not a compaction, `LATEST` is ignored. The data in the latest bucket of a compaction is possibly partial. A bucket is *closed* and compacted only upon arrival of a new sample that *opens* a new *latest* bucket. There are cases, however, when the compacted value of the latest, possibly partial, bucket is also required. In such a case, use `LATEST`. `FILTER_BY_TS ts...` (since RedisTimeSeries v1.6) filters samples by a list of specific timestamps. A sample passes the filter if its exact timestamp is specified and falls within `[fromTimestamp, toTimestamp]`. `FILTER_BY_VALUE min max` (since RedisTimeSeries v1.6) filters samples by minimum and maximum values. `COUNT count` limits the number of returned samples. `ALIGN align` (since RedisTimeSeries v1.6) is a time bucket alignment control for `AGGREGATION`. It controls the time bucket timestamps by changing the reference timestamp on which a bucket is defined. 
`align` values include: * `start` or `-`: The reference timestamp will be the query start interval time (`fromTimestamp`), which can't be `-` * `end` or `+`: The reference timestamp will be the query end interval time (`toTimestamp`), which can't be `+` * A specific timestamp: align the reference timestamp to a specific time **Note:** When not provided, alignment is set to `0`. `AGGREGATION aggregator bucketDuration` aggregates samples into time buckets, where: * `aggregator` takes one of the following aggregation types: | `aggregator` | Description | | --- | --- | | `avg` | Arithmetic mean of all values | | `sum` | Sum of all values | | `min` | Minimum value | | `max` | Maximum value | | `range` | Difference between the maximum and the minimum value | | `count` | Number of values | | `first` | Value with lowest timestamp in the bucket | | `last` | Value with highest timestamp in the bucket | | `std.p` | Population standard deviation of the values | | `std.s` | Sample standard deviation of the values | | `var.p` | Population variance of the values | | `var.s` | Sample variance of the values | | `twa` | Time-weighted average over the bucket's timeframe (since RedisTimeSeries v1.8) | * `bucketDuration` is the duration of each bucket, in milliseconds. Without `ALIGN`, bucket start times are multiples of `bucketDuration`. With `ALIGN align`, bucket start times are multiples of `bucketDuration` with remainder `align % bucketDuration`. The first bucket start time is less than or equal to `fromTimestamp`. `[BUCKETTIMESTAMP bt]` (since RedisTimeSeries v1.8) controls how bucket timestamps are reported. | `bt` | Timestamp reported for each bucket | | --- | --- | | `-` or `low` | the bucket's start time (default) | | `+` or `high` | the bucket's end time | | `~` or `mid` | the bucket's mid time (rounded down if not an integer) | `[EMPTY]` (since RedisTimeSeries v1.8) is a flag, which, when specified, reports aggregations also for empty buckets. 
| `aggregator` | Value reported for each empty bucket | | --- | --- | | `sum`, `count` | `0` | | `last` | The value of the last sample before the bucket's start. `NaN` when no such sample. | | `twa` | Average value over the bucket's timeframe based on linear interpolation of the last sample before the bucket's start and the first sample after the bucket's end. `NaN` when no such samples. | | `min`, `max`, `range`, `avg`, `first`, `std.p`, `std.s` | `NaN` | Regardless of the values of `fromTimestamp` and `toTimestamp`, no data is reported for buckets that end before the earliest sample or begin after the latest sample in the time series. Return value ------------ Either * [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) of ([Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings)) pairs representing (timestamp, value(double)) * [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors) (e.g., on invalid filter value) Complexity ---------- TS.RANGE complexity can be improved in the future by using binary search to find the start of the range, which makes this `O(Log(n/m)+k*m)`. But, because `m` is small, you can disregard it and look at the operation as `O(Log(n)+k)`. Examples -------- **Filter results by timestamp or sample value** Consider a metric where acceptable values are between -100 and 100, and the value 9999 is used as an indication of bad measurement. ``` 127.0.0.1:6379> TS.CREATE temp:TLV LABELS type temp location TLV OK 127.0.0.1:6379> TS.MADD temp:TLV 1000 30 temp:TLV 1010 35 temp:TLV 1020 9999 temp:TLV 1030 40 1) (integer) 1000 2) (integer) 1010 3) (integer) 1020 4) (integer) 1030 ``` Now, retrieve all values except out-of-range values. 
``` TS.RANGE temp:TLV - + FILTER_BY_VALUE -100 100 1) 1) (integer) 1000 2) 30 2) 1) (integer) 1010 2) 35 3) 1) (integer) 1030 2) 40 ``` Now, retrieve the average value, while ignoring out-of-range values. ``` TS.RANGE temp:TLV - + FILTER_BY_VALUE -100 100 AGGREGATION avg 1000 1) 1) (integer) 1000 2) 35 ``` **Align aggregation buckets** To demonstrate alignment, let’s create a stock and add prices at nine different timestamps. ``` 127.0.0.1:6379> TS.CREATE stock:A LABELS type stock name A OK 127.0.0.1:6379> TS.MADD stock:A 1000 100 stock:A 1010 110 stock:A 1020 120 1) (integer) 1000 2) (integer) 1010 3) (integer) 1020 127.0.0.1:6379> TS.MADD stock:A 2000 200 stock:A 2010 210 stock:A 2020 220 1) (integer) 2000 2) (integer) 2010 3) (integer) 2020 127.0.0.1:6379> TS.MADD stock:A 3000 300 stock:A 3010 310 stock:A 3020 320 1) (integer) 3000 2) (integer) 3010 3) (integer) 3020 ``` Next, aggregate without using `ALIGN`, defaulting to alignment 0. ``` 127.0.0.1:6379> TS.RANGE stock:A - + AGGREGATION min 20 1) 1) (integer) 1000 2) 100 2) 1) (integer) 1020 2) 120 3) 1) (integer) 2000 2) 200 4) 1) (integer) 2020 2) 220 5) 1) (integer) 3000 2) 300 6) 1) (integer) 3020 2) 320 ``` And now set `ALIGN` to 10 to have a bucket start at time 10, and align all the buckets with a 20 milliseconds duration. ``` 127.0.0.1:6379> TS.RANGE stock:A - + ALIGN 10 AGGREGATION min 20 1) 1) (integer) 990 2) 100 2) 1) (integer) 1010 2) 110 3) 1) (integer) 1990 2) 200 4) 1) (integer) 2010 2) 210 5) 1) (integer) 2990 2) 300 6) 1) (integer) 3010 2) 310 ``` When the start timestamp for the range query is explicitly stated (not `-`), you can set `ALIGN` to that time by setting align to `-` or to `start`. 
``` 127.0.0.1:6379> TS.RANGE stock:A 5 + ALIGN - AGGREGATION min 20 1) 1) (integer) 985 2) 100 2) 1) (integer) 1005 2) 110 3) 1) (integer) 1985 2) 200 4) 1) (integer) 2005 2) 210 5) 1) (integer) 2985 2) 300 6) 1) (integer) 3005 2) 310 ``` Similarly, when the end timestamp for the range query is explicitly stated, you can set `ALIGN` to that time by setting align to `+` or to `end`. See also -------- [`TS.MRANGE`](../ts.mrange) | [`TS.REVRANGE`](../ts.revrange) | [`TS.MREVRANGE`](../ts.mrevrange) Related topics -------------- [RedisTimeSeries](https://redis.io/docs/stack/timeseries)
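The bucket-start rule stated under `AGGREGATION` above can be sketched in Python (`bucket_start` is a hypothetical helper for illustration, not part of any Redis client):

```python
def bucket_start(ts, bucket_duration, align=0):
    """Start time of the aggregation bucket containing timestamp ts.

    Bucket starts are multiples of bucket_duration shifted by
    align % bucket_duration, matching the ALIGN rules described above.
    """
    offset = align % bucket_duration
    return (ts - offset) // bucket_duration * bucket_duration + offset

# Matches the examples above: with ALIGN 10 and a 20 ms bucket, the sample
# at timestamp 1000 falls into the bucket starting at 990.
assert bucket_start(1000, 20, align=10) == 990
assert bucket_start(1010, 20, align=10) == 1010
# Without ALIGN (alignment 0), the sample at 1010 falls into the bucket at 1000.
assert bucket_start(1010, 20) == 1000
```
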
redis LINDEX LINDEX ====== ``` LINDEX ``` Syntax ``` LINDEX key index ``` Available since: 1.0.0 Time complexity: O(N) where N is the number of elements to traverse to get to the element at index. This makes asking for the first or the last element of the list O(1). ACL categories: `@read`, `@list`, `@slow`, Returns the element at index `index` in the list stored at `key`. The index is zero-based, so `0` means the first element, `1` the second element and so on. Negative indices can be used to designate elements starting at the tail of the list. Here, `-1` means the last element, `-2` means the penultimate and so forth. When the value at `key` is not a list, an error is returned. Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): the requested element, or `nil` when `index` is out of range. Examples -------- ``` LPUSH mylist "World" LPUSH mylist "Hello" LINDEX mylist 0 LINDEX mylist -1 LINDEX mylist 3 ``` redis ZINTERCARD ZINTERCARD ========== ``` ZINTERCARD ``` Syntax ``` ZINTERCARD numkeys key [key ...] [LIMIT limit] ``` Available since: 7.0.0 Time complexity: O(N\*K) worst case with N being the smallest input sorted set, K being the number of input sorted sets. ACL categories: `@read`, `@sortedset`, `@slow`, This command is similar to [`ZINTER`](../zinter), but instead of returning the result set, it returns just the cardinality of the result. Keys that do not exist are considered to be empty sets. With one of the keys being an empty set, the resulting set is also empty (since set intersection with an empty set always results in an empty set). By default, the command calculates the cardinality of the intersection of all given sets. When provided with the optional `LIMIT` argument (which defaults to 0 and means unlimited), if the intersection cardinality reaches limit partway through the computation, the algorithm will exit and yield limit as the cardinality. 
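The `LIMIT` early-exit behavior described above can be simulated with plain Python sets (scores do not affect the cardinality, so they are omitted; `zintercard` is an illustrative helper, not a client API):

```python
def zintercard(sets, limit=0):
    """Cardinality of the intersection, stopping early once `limit` is reached.

    limit=0 means unlimited, mirroring the command's default.
    """
    if not sets:
        return 0
    smallest = min(sets, key=len)
    others = [s for s in sets if s is not smallest]
    count = 0
    for member in smallest:
        if all(member in s for s in others):
            count += 1
            if limit and count >= limit:
                break  # early exit: report `limit` as the cardinality
    return count

zset1 = {"one", "two"}
zset2 = {"one", "two", "three"}
assert zintercard([zset1, zset2]) == 2
assert zintercard([zset1, zset2], limit=1) == 1
```
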
Such an implementation ensures a significant speedup for queries where the limit is lower than the actual intersection cardinality. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of elements in the resulting intersection. Examples -------- ``` ZADD zset1 1 "one" ZADD zset1 2 "two" ZADD zset2 1 "one" ZADD zset2 2 "two" ZADD zset2 3 "three" ZINTER 2 zset1 zset2 ZINTERCARD 2 zset1 zset2 ZINTERCARD 2 zset1 zset2 LIMIT 1 ``` redis TS.MGET TS.MGET ======= ``` TS.MGET ``` Syntax ``` TS.MGET [LATEST] [WITHLABELS | SELECTED_LABELS label...] FILTER filterExpr... ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [TimeSeries 1.0.0](https://redis.io/docs/stack/timeseries) Time complexity: O(n) where n is the number of time-series that match the filters Get the sample with the highest timestamp from each time series matching a specific filter [Examples](#examples) Required arguments ------------------ `FILTER filterExpr...` filters time series based on their labels and label values. Each filter expression has one of the following syntaxes: * `label=value`, where `label` equals `value` * `label!=value`, where `label` does not equal `value` * `label=`, where `key` does not have label `label` * `label!=`, where `key` has label `label` * `label=(value1,value2,...)`, where `key` with label `label` equals one of the values in the list * `label!=(value1,value2,...)`, where `key` with label `label` does not equal any of the values in the list **NOTES:** * At least one `label=value` filter is required. * Filters are conjunctive. For example, the filter `type=temperature room=study` means that a time series must be a temperature time series of a study room. * Don't use whitespace in the filter expression. Optional arguments ------------------ `LATEST` (since RedisTimeSeries v1.8) is used when a time series is a compaction. 
With `LATEST`, TS.MGET also reports the compacted value of the latest, possibly partial, bucket. Without `LATEST`, TS.MGET does not report the latest, possibly partial, bucket. When a time series is not a compaction, `LATEST` is ignored. The data in the latest bucket of a compaction is possibly partial. A bucket is *closed* and compacted only upon arrival of a new sample that *opens* a new *latest* bucket. There are cases, however, when the compacted value of the latest, possibly partial, bucket is also required. In such a case, use `LATEST`. `WITHLABELS` includes in the reply all label-value pairs representing metadata labels of the time series. If `WITHLABELS` or `SELECTED_LABELS` are not specified, by default, an empty list is reported as label-value pairs. `SELECTED_LABELS label...` (since RedisTimeSeries v1.6) returns a subset of the label-value pairs that represent metadata labels of the time series. Use when a large number of labels exists per series, but only the values of some of the labels are required. If `WITHLABELS` or `SELECTED_LABELS` are not specified, by default, an empty list is reported as label-value pairs. **Note:** The [`MGET`](../mget) command cannot be part of a transaction when running on a Redis cluster. 
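The filter-expression semantics listed under `FILTER` above can be sketched as a label matcher over a plain dict (`matches` is a hypothetical helper for illustration, not a client API):

```python
def matches(labels, filters):
    """Return True when a series' labels satisfy every filter expression.

    Implements the conjunctive filter grammar described above over a
    label -> value dict.
    """
    for expr in filters:
        if "!=" in expr:
            name, value = expr.split("!=", 1)
            negate = True
        else:
            name, value = expr.split("=", 1)
            negate = False
        if value == "":
            # `label!=` matches series that HAVE the label; `label=` those that don't.
            ok = (name in labels) if negate else (name not in labels)
        elif value.startswith("(") and value.endswith(")"):
            # `label=(v1,v2,...)` matches any of the listed values.
            hit = labels.get(name) in value[1:-1].split(",")
            ok = not hit if negate else hit
        else:
            hit = labels.get(name) == value
            ok = not hit if negate else hit
        if not ok:
            return False
    return True

tlv = {"type": "temp", "location": "TLV"}
assert matches(tlv, ["type=temp", "location!=JLM"])
assert matches(tlv, ["location=(JLM,TLV)"])
assert matches(tlv, ["humidity="]) and not matches(tlv, ["humidity!="])
```
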
Return value ------------ * [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): for each time series matching the specified filters, the following is reported: + bulk-string-reply: The time series key name + [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): label-value pairs ([Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings), [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings)) - By default, an empty array is reported - If `WITHLABELS` is specified, all labels associated with this time series are reported - If `SELECTED_LABELS label...` is specified, the selected labels are reported (null value when no such label defined) + [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): a single timestamp-value pair ([Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) (double)) Examples -------- **Select labels to retrieve** Create time series for temperature in Tel Aviv and Jerusalem, then add different temperature samples. ``` 127.0.0.1:6379> TS.CREATE temp:TLV LABELS type temp location TLV OK 127.0.0.1:6379> TS.CREATE temp:JLM LABELS type temp location JLM OK 127.0.0.1:6379> TS.MADD temp:TLV 1000 30 temp:TLV 1010 35 temp:TLV 1020 9999 temp:TLV 1030 40 1) (integer) 1000 2) (integer) 1010 3) (integer) 1020 4) (integer) 1030 127.0.0.1:6379> TS.MADD temp:JLM 1005 30 temp:JLM 1015 35 temp:JLM 1025 9999 temp:JLM 1035 40 1) (integer) 1005 2) (integer) 1015 3) (integer) 1025 4) (integer) 1035 ``` Get all the labels associated with the last sample. 
``` 127.0.0.1:6379> TS.MGET WITHLABELS FILTER type=temp 1) 1) "temp:JLM" 2) 1) 1) "type" 2) "temp" 2) 1) "location" 2) "JLM" 3) 1) (integer) 1035 2) 40 2) 1) "temp:TLV" 2) 1) 1) "type" 2) "temp" 2) 1) "location" 2) "TLV" 3) 1) (integer) 1030 2) 40 ``` To get only the `location` label for each last sample, use `SELECTED_LABELS`. ``` 127.0.0.1:6379> TS.MGET SELECTED_LABELS location FILTER type=temp 1) 1) "temp:JLM" 2) 1) 1) "location" 2) "JLM" 3) 1) (integer) 1035 2) 40 2) 1) "temp:TLV" 2) 1) 1) "location" 2) "TLV" 3) 1) (integer) 1030 2) 40 ``` See also -------- [`TS.MRANGE`](../ts.mrange) | [`TS.RANGE`](../ts.range) | [`TS.MREVRANGE`](../ts.mrevrange) | [`TS.REVRANGE`](../ts.revrange) Related topics -------------- [RedisTimeSeries](https://redis.io/docs/stack/timeseries) redis TDIGEST.QUANTILE TDIGEST.QUANTILE ================ ``` TDIGEST.QUANTILE ``` Syntax ``` TDIGEST.QUANTILE key quantile [quantile ...] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.4.0](https://redis.io/docs/stack/bloom) Time complexity: O(1) Returns, for each input fraction, an estimation of the value (floating point) that is smaller than the given fraction of observations. Multiple quantiles can be retrieved in a single call. Required arguments ------------------ `key` is the key name for an existing t-digest sketch. `quantile` is the input fraction (between 0 and 1 inclusively) Return value ------------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) - an array of estimates (floating-point) populated with value_1, value_2, ..., value_N. * Return an accurate result when `quantile` is 0 (the value of the smallest observation) * Return an accurate result when `quantile` is 1 (the value of the largest observation) All values are 'nan' if the sketch is empty. 
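As a point of reference for the endpoint guarantees above, here is a naive exact nearest-rank quantile computation over raw observations (a t-digest only *estimates* these values from a compressed sketch; `quantiles` is an illustrative helper, not part of the module):

```python
import math

def quantiles(observations, fractions):
    """Naive exact nearest-rank quantiles over raw observations.

    Makes the endpoint guarantees visible: fraction 0 yields the smallest
    observation and fraction 1 the largest; an empty input yields NaNs.
    """
    if not observations:
        return [float("nan")] * len(fractions)
    data = sorted(observations)
    out = []
    for q in fractions:
        # Smallest value such that at least q * N observations are <= it.
        idx = max(0, min(len(data) - 1, math.ceil(q * len(data)) - 1))
        out.append(data[idx])
    return out

assert quantiles([1, 2, 3, 4, 5], [0, 0.5, 1]) == [1, 3, 5]
```
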
Examples -------- ``` redis> TDIGEST.CREATE t COMPRESSION 1000 OK redis> TDIGEST.ADD t 1 2 2 3 3 3 4 4 4 4 5 5 5 5 5 OK redis> TDIGEST.QUANTILE t 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1) "1" 2) "2" 3) "3" 4) "3" 5) "4" 6) "4" 7) "4" 8) "5" 9) "5" 10) "5" 11) "5" ``` redis FT.SUGADD FT.SUGADD ========= ``` FT.SUGADD ``` Syntax ``` FT.SUGADD key string score [INCR] [PAYLOAD payload] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.0.0](https://redis.io/docs/stack/search) Time complexity: O(1) Add a suggestion string to an auto-complete suggestion dictionary [Examples](#examples) Required arguments ------------------ `key` is the suggestion dictionary key. `string` is the suggestion string to index. `score` is the floating point number of the suggestion string's weight. The auto-complete suggestion dictionary is disconnected from the index definitions and leaves creating and updating suggestion dictionaries to the user. Optional arguments ------------------ `INCR` increments the existing entry of the suggestion by the given score, instead of replacing the score. This is useful for updating the dictionary based on user queries in real time. `PAYLOAD {payload}` saves an extra payload with the suggestion, which can be fetched by adding the `WITHPAYLOADS` argument to [`FT.SUGGET`](../ft.sugget). Return ------ FT.SUGADD returns an integer reply, which is the current size of the suggestion dictionary. 
Examples -------- **Add a suggestion string to an auto-complete suggestion dictionary** ``` 127.0.0.1:6379> FT.SUGADD sug "hello world" 1 (integer) 3 ``` See also -------- [`FT.SUGGET`](../ft.sugget) | [`FT.SUGDEL`](../ft.sugdel) | [`FT.SUGLEN`](../ft.suglen) Related topics -------------- [RediSearch](https://redis.io/docs/stack/search) History ------- * Starting with Redis version 2.0.0: Deprecated `PAYLOAD` argument redis CLUSTER CLUSTER ======= ``` CLUSTER COUNT-FAILURE-REPORTS ``` Syntax ``` CLUSTER COUNT-FAILURE-REPORTS node-id ``` Available since: 3.0.0 Time complexity: O(N) where N is the number of failure reports ACL categories: `@admin`, `@slow`, `@dangerous`, The command returns the number of *failure reports* for the specified node. Failure reports are the mechanism Redis Cluster uses to promote a `PFAIL` state, which means a node is not reachable, to a `FAIL` state, which means that the majority of masters in the cluster agreed within a window of time that the node is not reachable. A few more details: * A node flags another node with `PFAIL` when the node is not reachable for a time greater than the configured *node timeout*, which is a fundamental configuration parameter of a Redis Cluster. * Nodes in `PFAIL` state are provided in gossip sections of heartbeat packets. * Every time a node processes gossip packets from other nodes, it creates (and refreshes the TTL if needed) **failure reports**, remembering that a given node said another given node is in `PFAIL` condition. * Each failure report has a time to live of two times the *node timeout* time. * If at a given time a node has another node flagged with `PFAIL`, and has at the same time collected failure reports about this node from the majority of other master nodes (including itself if it is a master), then it elevates the failure state of the node from `PFAIL` to `FAIL`, and broadcasts a message forcing all the nodes that can be reached to flag the node as `FAIL`. 
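The expiring failure-report bookkeeping described above can be sketched as follows (the `FailureReports` class, the `NODE_TIMEOUT` value, and the explicit `now` arguments are illustrative choices for a self-contained example, not actual Redis internals):

```python
import time

NODE_TIMEOUT = 5.0  # seconds; stands in for the cluster's configured node timeout

class FailureReports:
    """Track which nodes reported a target node as PFAIL, with expiring entries."""
    def __init__(self):
        self.reports = {}  # reporter node id -> time the report was (re)freshed

    def add(self, reporter, now=None):
        # Re-adding an existing report refreshes its TTL.
        self.reports[reporter] = now if now is not None else time.time()

    def count(self, now=None):
        # A report lives for two times the node timeout.
        now = now if now is not None else time.time()
        ttl = 2 * NODE_TIMEOUT
        self.reports = {r: t for r, t in self.reports.items() if now - t <= ttl}
        return len(self.reports)

fr = FailureReports()
fr.add("node-A", now=0.0)
fr.add("node-B", now=1.0)
assert fr.count(now=2.0) == 2   # both reports still valid
assert fr.count(now=11.0) == 1  # node-A's report expired (older than 2 * timeout)
```
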
This command returns the number of failure reports for the current node which are currently not expired (so received within two times the *node timeout* time). The count does not include what the queried node itself believes about the node ID passed as an argument; it *only* includes the failure reports the node received from other nodes. This command is mainly useful for debugging, when the failure detector of Redis Cluster is not operating as we believe it should. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of active failure reports for the node. redis CLUSTER CLUSTER ======= ``` CLUSTER SLOTS (deprecated) ``` As of Redis version 7.0.0, this command is regarded as deprecated. It can be replaced by [`CLUSTER SHARDS`](../cluster-shards) when migrating or writing new code. Syntax ``` CLUSTER SLOTS ``` Available since: 3.0.0 Time complexity: O(N) where N is the total number of Cluster nodes ACL categories: `@slow`, `CLUSTER SLOTS` returns details about which cluster slots map to which Redis instances. The command is suitable to be used by Redis Cluster client library implementations in order to retrieve (or update when a redirection is received) the map associating cluster *hash slots* with actual node network information, so that when a command is received, it can be sent to what is likely the right instance for the keys specified in the command. The networking information for each node is an array containing the following elements: * Preferred endpoint (either an IP address, hostname, or NULL) * Port number * The node ID * A map of additional networking metadata The preferred endpoint, along with the port, defines the location that clients should use to send requests for a given slot. 
A NULL value for the endpoint indicates the node has an unknown endpoint and the client should connect to the same endpoint it used to send the `CLUSTER SLOTS` command but with the port returned from the command. This unknown endpoint configuration is useful when the Redis nodes are behind a load balancer that Redis doesn't know the endpoint of. Which endpoint is set as preferred is determined by the `cluster-preferred-endpoint-type` config. Additional networking metadata is provided as a map on the fourth argument for each node. The following networking metadata may be returned: * IP: When the preferred endpoint is not set to IP. * Hostname: When a node has an announced hostname but the primary endpoint is not set to hostname. Nested Result Array ------------------- Each nested result is: * Start slot range * End slot range * Master for slot range represented as nested networking information * First replica of master for slot range * Second replica * ...continues until all replicas for this master are returned. Each result includes all active replicas of the master instance for the listed slot range. Failed replicas are not returned. The third nested reply is guaranteed to be the networking information of the master instance for the slot range. All networking information after the third nested reply are replicas of the master. If a cluster instance has non-contiguous slots (e.g. 1-400,900,1800-6000) then master and replica networking information results will be duplicated for each top-level slot range reply. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): nested list of slot ranges with networking information. 
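A client library typically folds this nested reply into a slot routing table. A minimal pure-Python sketch (the sample reply shape follows the description above; all concrete values and names are illustrative):

```python
# A hand-written sample in the shape of a CLUSTER SLOTS reply (illustrative values).
sample_reply = [
    [0, 5460,
     ["127.0.0.1", 30001, "node-id-1", ["hostname", "host-1.example.com"]],   # master
     ["127.0.0.1", 30004, "node-id-2", ["hostname", "host-2.example.com"]]],  # replica
    [5461, 10922,
     ["127.0.0.1", 30002, "node-id-3", ["hostname", "host-3.example.com"]],
     ["127.0.0.1", 30005, "node-id-4", ["hostname", "host-4.example.com"]]],
]

def build_slot_map(reply):
    """Fold the nested reply into (start, end, master_endpoint) ranges."""
    ranges = []
    for entry in reply:
        start, end = entry[0], entry[1]
        # The third element is guaranteed to be the master's networking info.
        master = entry[2]
        endpoint, port = master[0], master[1]
        ranges.append((start, end, (endpoint, port)))
    return ranges

def node_for_slot(ranges, slot):
    for start, end, node in ranges:
        if start <= slot <= end:
            return node
    return None

ranges = build_slot_map(sample_reply)
print(node_for_slot(ranges, 100))    # ('127.0.0.1', 30001)
print(node_for_slot(ranges, 6000))   # ('127.0.0.1', 30002)
```

Per the warning below, a robust client should ignore any elements beyond the ones at the documented positions, since future versions may append more.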
Examples -------- ``` > CLUSTER SLOTS 1) 1) (integer) 0 2) (integer) 5460 3) 1) "127.0.0.1" 2) (integer) 30001 3) "09dbe9720cda62f7865eabc5fd8857c5d2678366" 4) 1) hostname 2) "host-1.redis.example.com" 4) 1) "127.0.0.1" 2) (integer) 30004 3) "821d8ca00d7ccf931ed3ffc7e3db0599d2271abf" 4) 1) hostname 2) "host-2.redis.example.com" 2) 1) (integer) 5461 2) (integer) 10922 3) 1) "127.0.0.1" 2) (integer) 30002 3) "c9d93d9f2c0c524ff34cc11838c2003d8c29e013" 4) 1) hostname 2) "host-3.redis.example.com" 4) 1) "127.0.0.1" 2) (integer) 30005 3) "faadb3eb99009de4ab72ad6b6ed87634c7ee410f" 4) 1) hostname 2) "host-4.redis.example.com" 3) 1) (integer) 10923 2) (integer) 16383 3) 1) "127.0.0.1" 2) (integer) 30003 3) "044ec91f325b7595e76dbcb18cc688b6a5b434a1" 4) 1) hostname 2) "host-5.redis.example.com" 4) 1) "127.0.0.1" 2) (integer) 30006 3) "58e6e48d41228013e5d9c1c37c5060693925e97e" 4) 1) hostname 2) "host-6.redis.example.com" ``` **Warning:** In future versions there could be more elements describing the node better. In general a client implementation should just rely on the fact that certain parameters are at fixed positions as specified, but more parameters may follow and should be ignored. Similarly a client library should try if possible to cope with the fact that older versions may just have the primary endpoint and port parameter. Behavior change history ----------------------- * `>= 7.0.0`: Added support for hostnames and unknown endpoints in first field of node response. History ------- * Starting with Redis version 4.0.0: Added node IDs. * Starting with Redis version 7.0.0: Added additional networking metadata field. redis SINTERSTORE SINTERSTORE =========== ``` SINTERSTORE ``` Syntax ``` SINTERSTORE destination key [key ...] ``` Available since: 1.0.0 Time complexity: O(N\*M) worst case where N is the cardinality of the smallest set and M is the number of sets. 
ACL categories: `@write`, `@set`, `@slow`, This command is equal to [`SINTER`](../sinter), but instead of returning the resulting set, it is stored in `destination`. If `destination` already exists, it is overwritten. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of elements in the resulting set. Examples -------- ``` SADD key1 "a" SADD key1 "b" SADD key1 "c" SADD key2 "c" SADD key2 "d" SADD key2 "e" SINTERSTORE key key1 key2 SMEMBERS key ``` redis COMMAND COMMAND ======= ``` COMMAND DOCS ``` Syntax ``` COMMAND DOCS [command-name [command-name ...]] ``` Available since: 7.0.0 Time complexity: O(N) where N is the number of commands to look up ACL categories: `@slow`, `@connection`, Return documentary information about commands. By default, the reply includes all of the server's commands. You can use the optional *command-name* argument to specify the names of one or more commands. The reply includes a map for each returned command. The following keys may be included in the mapped reply: * **summary:** short command description. * **since:** the Redis version that added the command (or for module commands, the module version). * **group:** the functional group to which the command belongs. Possible values are: + *bitmap* + *cluster* + *connection* + *generic* + *geo* + *hash* + *hyperloglog* + *list* + *module* + *pubsub* + *scripting* + *sentinel* + *server* + *set* + *sorted-set* + *stream* + *string* + *transactions* * **complexity:** a short explanation about the command's time complexity. * **doc\_flags:** an array of documentation flags. Possible values are: + *deprecated:* the command is deprecated. + *syscmd:* a system command that isn't meant to be called by users. * **deprecated\_since:** the Redis version that deprecated the command (or for module commands, the module version). * **replaced\_by:** the alternative for a deprecated command.
* **history:** an array of historical notes describing changes to the command's behavior or arguments. Each entry is an array itself, made up of two elements: 1. The Redis version that the entry applies to. 2. The description of the change. * **arguments:** an array of maps that describe the command's arguments. Please refer to the [Redis command arguments](https://redis.io/topics/command-arguments) page for more information. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): a map as a flattened array as described above. Examples -------- ``` COMMAND DOCS SET ```
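Over RESP2, "a map as a flattened array" means alternating keys and values. A minimal sketch of turning such a reply into nested dictionaries (the reply fragment below is hand-written and abbreviated, not a full server response):

```python
def unflatten_map(flat):
    """Turn [k1, v1, k2, v2, ...] into a dict."""
    return {flat[i]: flat[i + 1] for i in range(0, len(flat), 2)}

# Illustrative, abbreviated fragment shaped like a COMMAND DOCS reply for GET.
reply = ["get", ["summary", "Get the value of a key",
                 "since", "1.0.0",
                 "group", "string",
                 "complexity", "O(1)"]]
docs = {reply[0]: unflatten_map(reply[1])}
print(docs["get"]["since"])  # 1.0.0
```

Nested values (such as the `history` and `arguments` arrays) would need the same treatment applied recursively where they are themselves flattened maps.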
redis BLMOVE BLMOVE ====== ``` BLMOVE ``` Syntax ``` BLMOVE source destination <LEFT | RIGHT> <LEFT | RIGHT> timeout ``` Available since: 6.2.0 Time complexity: O(1) ACL categories: `@write`, `@list`, `@slow`, `@blocking`, `BLMOVE` is the blocking variant of [`LMOVE`](../lmove). When `source` contains elements, this command behaves exactly like [`LMOVE`](../lmove). When used inside a [`MULTI`](../multi)/[`EXEC`](../exec) block, this command behaves exactly like [`LMOVE`](../lmove). When `source` is empty, Redis will block the connection until another client pushes to it or until `timeout` (a double value specifying the maximum number of seconds to block) is reached. A `timeout` of zero can be used to block indefinitely. This command comes in place of the now deprecated [`BRPOPLPUSH`](../brpoplpush). Doing `BLMOVE RIGHT LEFT` is equivalent. See [`LMOVE`](../lmove) for more information. Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): the element being popped from `source` and pushed to `destination`. If `timeout` is reached, a [Null reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings) is returned. Pattern: Reliable queue ----------------------- Please see the pattern description in the [`LMOVE`](../lmove) documentation. Pattern: Circular list ---------------------- Please see the pattern description in the [`LMOVE`](../lmove) documentation. redis GRAPH.SLOWLOG GRAPH.SLOWLOG ============= ``` GRAPH.SLOWLOG ``` Syntax ``` GRAPH.SLOWLOG graph ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Graph 2.0.12](https://redis.io/docs/stack/graph) Time complexity: Returns a list containing up to 10 of the slowest queries issued against the given graph ID. Each item in the list has the following structure: 1. A Unix timestamp at which the log entry was processed. 2. The issued command. 3. The issued query. 4. The amount of time needed for its execution, in milliseconds. 
``` GRAPH.SLOWLOG graph_id 1) 1) "1581932396" 2) "GRAPH.QUERY" 3) "MATCH (a:Person)-[:FRIEND]->(e) RETURN e.name" 4) "0.831" 2) 1) "1581932396" 2) "GRAPH.QUERY" 3) "MATCH (me:Person)-[:FRIEND]->(:Person)-[:FRIEND]->(fof:Person) RETURN fof.name" 4) "0.288" ``` To reset a graph's slowlog, issue the following command: ``` GRAPH.SLOWLOG graph_id RESET ``` Once cleared, the information is lost forever. redis ZCOUNT ZCOUNT ====== ``` ZCOUNT ``` Syntax ``` ZCOUNT key min max ``` Available since: 2.0.0 Time complexity: O(log(N)) with N being the number of elements in the sorted set. ACL categories: `@read`, `@sortedset`, `@fast`, Returns the number of elements in the sorted set at `key` with a score between `min` and `max`. The `min` and `max` arguments have the same semantics as described for [`ZRANGEBYSCORE`](../zrangebyscore). Note: the command has a complexity of just O(log(N)) because it uses element ranks (see [`ZRANK`](../zrank)) to get an idea of the range. Because of this there is no need to do work proportional to the size of the range. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of elements in the specified score range. Examples -------- ``` ZADD myzset 1 "one" ZADD myzset 2 "two" ZADD myzset 3 "three" ZCOUNT myzset -inf +inf ZCOUNT myzset (1 3 ``` redis TS.MRANGE TS.MRANGE ========= ``` TS.MRANGE ``` Syntax ``` TS.MRANGE fromTimestamp toTimestamp [LATEST] [FILTER_BY_TS ts...] [FILTER_BY_VALUE min max] [WITHLABELS | SELECTED_LABELS label...] [COUNT count] [[ALIGN align] AGGREGATION aggregator bucketDuration [BUCKETTIMESTAMP bt] [EMPTY]] FILTER filterExpr... 
[GROUPBY label REDUCE reducer] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [TimeSeries 1.0.0](https://redis.io/docs/stack/timeseries) Time complexity: O(n/m+k) where n = Number of data points, m = Chunk size (data points per chunk), k = Number of data points that are in the requested ranges Query a range across multiple time series by filters in forward direction [Examples](#examples) Required arguments ------------------ `fromTimestamp` is the start timestamp for the range query (integer UNIX timestamp in milliseconds) or `-` to denote the timestamp of the earliest sample amongst all time series that pass `FILTER filterExpr...`. `toTimestamp` is the end timestamp for the range query (integer UNIX timestamp in milliseconds) or `+` to denote the timestamp of the latest sample amongst all time series that pass `FILTER filterExpr...`. `FILTER filterExpr...` filters time series based on their labels and label values. Each filter expression has one of the following syntaxes: * `label=value`, where `label` equals `value` * `label!=value`, where `label` does not equal `value` * `label=`, where `key` does not have label `label` * `label!=`, where `key` has label `label` * `label=(value1,value2,...)`, where `key` with label `label` equals one of the values in the list * `label!=(value1,value2,...)`, where key with label `label` does not equal any of the values in the list **Notes:** * At least one `label=value` filter is required. * Filters are conjunctive. For example, the FILTER `type=temperature room=study` means that a time series is a temperature time series of a study room. * Don't use whitespace in the filter expression. Optional arguments ------------------ `LATEST` (since RedisTimeSeries v1.8) is used when a time series is a compaction. With `LATEST`, TS.MRANGE also reports the compacted value of the latest possibly partial bucket, given that this bucket's start time falls within `[fromTimestamp, toTimestamp]`. 
Without `LATEST`, TS.MRANGE does not report the latest possibly partial bucket. When a time series is not a compaction, `LATEST` is ignored. The data in the latest bucket of a compaction is possibly partial. A bucket is *closed* and compacted only upon arrival of a new sample that *opens* a new *latest* bucket. There are cases, however, when the compacted value of the latest possibly partial bucket is also required. In such a case, use `LATEST`. `FILTER_BY_TS ts...` (since RedisTimeSeries v1.6) filters samples by a list of specific timestamps. A sample passes the filter if its exact timestamp is specified and falls within `[fromTimestamp, toTimestamp]`. `FILTER_BY_VALUE min max` (since RedisTimeSeries v1.6) filters samples by minimum and maximum values. `WITHLABELS` includes in the reply all label-value pairs representing metadata labels of the time series. If `WITHLABELS` or `SELECTED_LABELS` are not specified, by default, an empty list is reported as label-value pairs. `SELECTED_LABELS label...` (since RedisTimeSeries v1.6) returns a subset of the label-value pairs that represent metadata labels of the time series. Use when a large number of labels exists per series, but only the values of some of the labels are required. If `WITHLABELS` or `SELECTED_LABELS` are not specified, by default, an empty list is reported as label-value pairs. `COUNT count` limits the number of returned samples. `ALIGN align` (since RedisTimeSeries v1.6) is a time bucket alignment control for `AGGREGATION`. It controls the time bucket timestamps by changing the reference timestamp on which a bucket is defined. Values include: * `start` or `-`: The reference timestamp will be the query start interval time (`fromTimestamp`) which can't be `-` * `end` or `+`: The reference timestamp will be the query end interval time (`toTimestamp`) which can't be `+` * A specific timestamp: align the reference timestamp to a specific time **Note:** When not provided, alignment is set to `0`. 
`AGGREGATION aggregator bucketDuration` per time series, aggregates samples into time buckets, where: * `aggregator` takes one of the following aggregation types: | `aggregator` | Description | | --- | --- | | `avg` | Arithmetic mean of all values | | `sum` | Sum of all values | | `min` | Minimum value | | `max` | Maximum value | | `range` | Difference between maximum value and minimum value | | `count` | Number of values | | `first` | Value with lowest timestamp in the bucket | | `last` | Value with highest timestamp in the bucket | | `std.p` | Population standard deviation of the values | | `std.s` | Sample standard deviation of the values | | `var.p` | Population variance of the values | | `var.s` | Sample variance of the values | | `twa` | Time-weighted average over the bucket's timeframe (since RedisTimeSeries v1.8) | * `bucketDuration` is duration of each bucket, in milliseconds. Without `ALIGN`, bucket start times are multiples of `bucketDuration`. With `ALIGN align`, bucket start times are multiples of `bucketDuration` with remainder `align % bucketDuration`. The first bucket start time is less than or equal to `fromTimestamp`. `[BUCKETTIMESTAMP bt]` (since RedisTimeSeries v1.8) controls how bucket timestamps are reported. | `bt` | Timestamp reported for each bucket | | --- | --- | | `-` or `low` | the bucket's start time (default) | | `+` or `high` | the bucket's end time | | `~` or `mid` | the bucket's mid time (rounded down if not an integer) | `[EMPTY]` (since RedisTimeSeries v1.8) is a flag, which, when specified, reports aggregations also for empty buckets. | `aggregator` | Value reported for each empty bucket | | --- | --- | | `sum`, `count` | `0` | | `last` | The value of the last sample before the bucket's start. `NaN` when no such sample. | | `twa` | Average value over the bucket's timeframe based on linear interpolation of the last sample before the bucket's start and the first sample after the bucket's end. `NaN` when no such samples. 
| | `min`, `max`, `range`, `avg`, `first`, `std.p`, `std.s` | `NaN` | Regardless of the values of `fromTimestamp` and `toTimestamp`, no data is reported for buckets that end before the earliest sample or begin after the latest sample in the time series. `GROUPBY label REDUCE reducer` (since RedisTimeSeries v1.6) splits time series into groups, each group contains time series that share the same value for the provided label name, then aggregates results in each group. When combined with `AGGREGATION` the `GROUPBY`/`REDUCE` is applied post aggregation stage. * `label` is label name. A group is created for all time series that share the same value for this label. * `reducer` is an aggregation type used to aggregate the results in each group. | `reducer` | Description | | --- | --- | | `avg` | Arithmetic mean of all non-NaN values (since RedisTimeSeries v1.8) | | `sum` | Sum of all non-NaN values | | `min` | Minimum non-NaN value | | `max` | Maximum non-NaN value | | `range` | Difference between maximum non-NaN value and minimum non-NaN value (since RedisTimeSeries v1.8) | | `count` | Number of non-NaN values (since RedisTimeSeries v1.8) | | `std.p` | Population standard deviation of all non-NaN values (since RedisTimeSeries v1.8) | | `std.s` | Sample standard deviation of all non-NaN values (since RedisTimeSeries v1.8) | | `var.p` | Population variance of all non-NaN values (since RedisTimeSeries v1.8) | | `var.s` | Sample variance of all non-NaN values (since RedisTimeSeries v1.8) | **Notes:** * The produced time series is named `<label>=<value>` * The produced time series contains two labels with these label array structures: + `__reducer__`, the reducer used (e.g., `"count"`) + `__source__`, the list of time series keys used to compute the grouped series (e.g., `"key1,key2,key3"`) **Note:** An `MRANGE` command cannot be part of a transaction when running on a Redis cluster. 
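The bucket-alignment rule described under `AGGREGATION` (bucket start times are multiples of `bucketDuration`, shifted by `align % bucketDuration`) can be sketched in pure Python; this is an illustrative model of the rule as stated above, not RedisTimeSeries code:

```python
def bucket_start(ts, bucket_duration, align=0):
    """Start time of the bucket containing ts: start times are multiples of
    bucket_duration shifted by align % bucket_duration."""
    shift = align % bucket_duration
    return (ts - shift) // bucket_duration * bucket_duration + shift

# Without ALIGN, bucket starts are plain multiples of the duration.
print(bucket_start(1023, 1000))           # 1000
# With ALIGN 500, starts fall on ..., 500, 1500, 2500, ...
print(bucket_start(1023, 1000, align=500))  # 500
```

By default the reported bucket timestamp is this start time; `BUCKETTIMESTAMP +` or `~` would shift the report to the bucket's end or midpoint.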
Return value ------------ If `GROUPBY label REDUCE reducer` is not specified: * [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): for each time series matching the specified filters, the following is reported: + bulk-string-reply: The time series key name + [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): label-value pairs ([Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings), [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings)) - By default, an empty array is reported - If `WITHLABELS` is specified, all labels associated with this time series are reported - If `SELECTED_LABELS label...` is specified, the selected labels are reported (null value when no such label defined) + [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): timestamp-value pairs ([Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) (double)): all samples/aggregations matching the range If `GROUPBY label REDUCE reducer` is specified: * [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): for each group of time series matching the specified filters, the following is reported: + bulk-string-reply with the format `label=value` where `label` is the `GROUPBY` label argument + [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): label-value pairs ([Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings), [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings)): - By default, an empty array is reported - If `WITHLABELS` is specified, the `GROUPBY` label argument and value are reported - If `SELECTED_LABELS label...` is specified, the selected labels are reported (null value when no such label defined or label does not have the same value for all grouped time 
series) + [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): either a single pair ([Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings), [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings)): the `GROUPBY` label argument and value, or empty array if + [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): a single pair ([Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings), [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings)): the string `__reducer__` and the reducer argument + [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): a single pair ([Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings), [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings)): the string `__source__` and the time series key names separated by `,` + [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): timestamp-value pairs ([Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) (double)): all samples/aggregations matching the range Examples -------- **Retrieve maximum stock price per timestamp** Create two stocks and add their prices at three different timestamps. ``` 127.0.0.1:6379> TS.CREATE stock:A LABELS type stock name A OK 127.0.0.1:6379> TS.CREATE stock:B LABELS type stock name B OK 127.0.0.1:6379> TS.MADD stock:A 1000 100 stock:A 1010 110 stock:A 1020 120 1) (integer) 1000 2) (integer) 1010 3) (integer) 1020 127.0.0.1:6379> TS.MADD stock:B 1000 120 stock:B 1010 110 stock:B 1020 100 1) (integer) 1000 2) (integer) 1010 3) (integer) 1020 ``` You can now retrieve the maximum stock price per timestamp. 
``` 127.0.0.1:6379> TS.MRANGE - + WITHLABELS FILTER type=stock GROUPBY type REDUCE max 1) 1) "type=stock" 2) 1) 1) "type" 2) "stock" 2) 1) "__reducer__" 2) "max" 3) 1) "__source__" 2) "stock:A,stock:B" 3) 1) 1) (integer) 1000 2) 120 2) 1) (integer) 1010 2) 110 3) 1) (integer) 1020 2) 120 ``` The `FILTER type=stock` clause returns a single time series representing stock prices. The `GROUPBY type REDUCE max` clause splits the time series into groups with identical type values, and then, for each timestamp, aggregates all series that share the same type value using the max aggregator. **Calculate average stock price and retrieve maximum average** Create two stocks and add their prices at nine different timestamps. ``` 127.0.0.1:6379> TS.CREATE stock:A LABELS type stock name A OK 127.0.0.1:6379> TS.CREATE stock:B LABELS type stock name B OK 127.0.0.1:6379> TS.MADD stock:A 1000 100 stock:A 1010 110 stock:A 1020 120 1) (integer) 1000 2) (integer) 1010 3) (integer) 1020 127.0.0.1:6379> TS.MADD stock:B 1000 120 stock:B 1010 110 stock:B 1020 100 1) (integer) 1000 2) (integer) 1010 3) (integer) 1020 127.0.0.1:6379> TS.MADD stock:A 2000 200 stock:A 2010 210 stock:A 2020 220 1) (integer) 2000 2) (integer) 2010 3) (integer) 2020 127.0.0.1:6379> TS.MADD stock:B 2000 220 stock:B 2010 210 stock:B 2020 200 1) (integer) 2000 2) (integer) 2010 3) (integer) 2020 127.0.0.1:6379> TS.MADD stock:A 3000 300 stock:A 3010 310 stock:A 3020 320 1) (integer) 3000 2) (integer) 3010 3) (integer) 3020 127.0.0.1:6379> TS.MADD stock:B 3000 320 stock:B 3010 310 stock:B 3020 300 1) (integer) 3000 2) (integer) 3010 3) (integer) 3020 ``` Now, for each stock, calculate the average stock price per 1000-millisecond timeframe, and then retrieve the stock with the maximum average for that timeframe. 
``` 127.0.0.1:6379> TS.MRANGE - + WITHLABELS AGGREGATION avg 1000 FILTER type=stock GROUPBY type REDUCE max 1) 1) "type=stock" 2) 1) 1) "type" 2) "stock" 2) 1) "__reducer__" 2) "max" 3) 1) "__source__" 2) "stock:A,stock:B" 3) 1) 1) (integer) 1000 2) 110 2) 1) (integer) 2000 2) 210 3) 1) (integer) 3000 2) 310 ``` **Group query results** Query all time series with the metric label equal to `cpu`, then group the time series by the value of their `metric_name` label and for each group return the maximum value and the time series keys (*source*) with that value. ``` 127.0.0.1:6379> TS.ADD ts1 1548149180000 90 labels metric cpu metric_name system (integer) 1548149180000 127.0.0.1:6379> TS.ADD ts1 1548149185000 45 (integer) 1548149185000 127.0.0.1:6379> TS.ADD ts2 1548149180000 99 labels metric cpu metric_name user (integer) 1548149180000 127.0.0.1:6379> TS.MRANGE - + WITHLABELS FILTER metric=cpu GROUPBY metric_name REDUCE max 1) 1) "metric_name=system" 2) 1) 1) "metric_name" 2) "system" 2) 1) "__reducer__" 2) "max" 3) 1) "__source__" 2) "ts1" 3) 1) 1) (integer) 1548149180000 2) 90 2) 1) (integer) 1548149185000 2) 45 2) 1) "metric_name=user" 2) 1) 1) "metric_name" 2) "user" 2) 1) "__reducer__" 2) "max" 3) 1) "__source__" 2) "ts2" 3) 1) 1) (integer) 1548149180000 2) 99 ``` **Filter query by value** Query all time series with the metric label equal to `cpu`, then keep only the values greater than or equal to 90.0 and less than or equal to 100.0. 
``` 127.0.0.1:6379> TS.ADD ts1 1548149180000 90 labels metric cpu metric_name system (integer) 1548149180000 127.0.0.1:6379> TS.ADD ts1 1548149185000 45 (integer) 1548149185000 127.0.0.1:6379> TS.ADD ts2 1548149180000 99 labels metric cpu metric_name user (integer) 1548149180000 127.0.0.1:6379> TS.MRANGE - + FILTER_BY_VALUE 90 100 WITHLABELS FILTER metric=cpu 1) 1) "ts1" 2) 1) 1) "metric" 2) "cpu" 2) 1) "metric_name" 2) "system" 3) 1) 1) (integer) 1548149180000 2) 90 2) 1) "ts2" 2) 1) 1) "metric" 2) "cpu" 2) 1) "metric_name" 2) "user" 3) 1) 1) (integer) 1548149180000 2) 99 ``` **Query using a label** Query all time series with the metric label equal to `cpu`, but only return the team label. ``` 127.0.0.1:6379> TS.ADD ts1 1548149180000 90 labels metric cpu metric_name system team NY (integer) 1548149180000 127.0.0.1:6379> TS.ADD ts1 1548149185000 45 (integer) 1548149185000 127.0.0.1:6379> TS.ADD ts2 1548149180000 99 labels metric cpu metric_name user team SF (integer) 1548149180000 127.0.0.1:6379> TS.MRANGE - + SELECTED_LABELS team FILTER metric=cpu 1) 1) "ts1" 2) 1) 1) "team" 2) "NY" 3) 1) 1) (integer) 1548149180000 2) 90 2) 1) (integer) 1548149185000 2) 45 2) 1) "ts2" 2) 1) 1) "team" 2) "SF" 3) 1) 1) (integer) 1548149180000 2) 99 ``` See also -------- [`TS.RANGE`](../ts.range) | [`TS.MREVRANGE`](../ts.mrevrange) | [`TS.REVRANGE`](../ts.revrange) Related topics -------------- [RedisTimeSeries](https://redis.io/docs/stack/timeseries)
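The `GROUPBY`/`REDUCE` step can be modeled in pure Python: group series that share a label value, then reduce per timestamp. This toy sketch mirrors the stock example above (the in-memory data layout is an assumption for illustration, not the module's internals):

```python
from collections import defaultdict

# Samples per series key, plus each series' labels (mirrors the stock example).
series = {
    "stock:A": {"labels": {"type": "stock"}, "samples": {1000: 100.0, 1010: 110.0, 1020: 120.0}},
    "stock:B": {"labels": {"type": "stock"}, "samples": {1000: 120.0, 1010: 110.0, 1020: 100.0}},
}

def groupby_reduce(series, label, reducer=max):
    """Group series sharing the same value for `label`, then reduce per timestamp."""
    groups = defaultdict(lambda: defaultdict(list))
    for key, s in series.items():
        if label not in s["labels"]:
            continue
        group = f"{label}={s['labels'][label]}"  # produced series is named <label>=<value>
        for ts, value in s["samples"].items():
            groups[group][ts].append(value)
    return {g: {ts: reducer(vs) for ts, vs in sorted(tss.items())}
            for g, tss in groups.items()}

print(groupby_reduce(series, "type"))
# {'type=stock': {1000: 120.0, 1010: 110.0, 1020: 120.0}}
```

The per-timestamp values match the first `GROUPBY type REDUCE max` example above.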
redis COMMAND COMMAND ======= ``` COMMAND COUNT ``` Syntax ``` COMMAND COUNT ``` Available since: 2.8.13 Time complexity: O(1) ACL categories: `@slow`, `@connection`, Returns an [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) of the total number of commands in this Redis server. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): number of commands returned by [`COMMAND`](../command) Examples -------- ``` COMMAND COUNT ``` redis CMS.QUERY CMS.QUERY ========= ``` CMS.QUERY ``` Syntax ``` CMS.QUERY key item [item ...] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(n) where n is the number of items Returns the count for one or more items in a sketch. ### Parameters: * **key**: The name of the sketch. * **item**: One or more items for which to return the count. Return ------ Count of one or more items [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) of [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) with a min-count of each of the items in the sketch. Examples -------- ``` redis> CMS.QUERY test foo bar 1) (integer) 10 2) (integer) 42 ``` redis SPOP SPOP ==== ``` SPOP ``` Syntax ``` SPOP key [count] ``` Available since: 1.0.0 Time complexity: Without the count argument O(1), otherwise O(N) where N is the value of the passed count. ACL categories: `@write`, `@set`, `@fast`, Removes and returns one or more random members from the set value stored at `key`. This operation is similar to [`SRANDMEMBER`](../srandmember), which returns one or more random elements from a set without removing them. By default, the command pops a single member from the set. When provided with the optional `count` argument, the reply will consist of up to `count` members, depending on the set's cardinality. 
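These SPOP semantics (pop one member, or up to `count` members bounded by the set's cardinality) can be sketched as a toy pure-Python model; the dictionary-of-sets layout is an illustrative assumption, not Redis internals:

```python
import random

def spop(sets, key, count=None):
    """Toy model of SPOP: remove and return random members from the set at key."""
    s = sets.get(key)
    if not s:
        # Missing key: nil without count, empty array with count.
        return None if count is None else []
    if count is None:
        member = random.choice(sorted(s))
        s.remove(member)
        return member
    # With count, up to `count` members are removed, bounded by cardinality.
    popped = random.sample(sorted(s), min(count, len(s)))
    s.difference_update(popped)
    return popped

sets = {"myset": {"one", "two", "three"}}
popped = spop(sets, "myset", 2)
print(len(popped), len(sets["myset"]))  # 2 1
```

Note the caveat in the section below: real SPOP's randomness is not a guaranteed uniform distribution.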
Return ------ When called without the `count` argument: [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): the removed member, or `nil` when `key` does not exist. When called with the `count` argument: [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): the removed members, or an empty array when `key` does not exist. Examples -------- ``` SADD myset "one" SADD myset "two" SADD myset "three" SPOP myset SMEMBERS myset SADD myset "four" SADD myset "five" SPOP myset 3 SMEMBERS myset ``` Distribution of returned elements --------------------------------- Note that this command is not suitable when you need a guaranteed uniform distribution of the returned elements. For more information about the algorithms used for `SPOP`, look up both the Knuth sampling and Floyd sampling algorithms. History ------- * Starting with Redis version 3.2.0: Added the `count` argument. redis SCAN SCAN ==== ``` SCAN ``` Syntax ``` SCAN cursor [MATCH pattern] [COUNT count] [TYPE type] ``` Available since: 2.8.0 Time complexity: O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0. N is the number of elements inside the collection. ACL categories: `@keyspace`, `@read`, `@slow`, The `SCAN` command and the closely related commands [`SSCAN`](../sscan), [`HSCAN`](../hscan) and [`ZSCAN`](../zscan) are used in order to incrementally iterate over a collection of elements. * `SCAN` iterates the set of keys in the currently selected Redis database. * [`SSCAN`](../sscan) iterates elements of Sets types. * [`HSCAN`](../hscan) iterates fields of Hash types and their associated values. * [`ZSCAN`](../zscan) iterates elements of Sorted Set types and their associated scores. 
Since these commands allow for incremental iteration, returning only a small number of elements per call, they can be used in production without the downside of commands like [`KEYS`](../keys) or [`SMEMBERS`](../smembers) that may block the server for a long time (even several seconds) when called against big collections of keys or elements. However, while blocking commands like [`SMEMBERS`](../smembers) are able to provide all the elements that are part of a Set in a given moment, the SCAN family of commands only offers limited guarantees about the returned elements since the collection that we incrementally iterate can change during the iteration process. Note that `SCAN`, [`SSCAN`](../sscan), [`HSCAN`](../hscan) and [`ZSCAN`](../zscan) all work very similarly, so this documentation covers all four commands. However, an obvious difference is that in the case of [`SSCAN`](../sscan), [`HSCAN`](../hscan) and [`ZSCAN`](../zscan) the first argument is the name of the key holding the Set, Hash or Sorted Set value. The `SCAN` command does not need any key name argument as it iterates keys in the current database, so the iterated object is the database itself. SCAN basic usage ---------------- SCAN is a cursor-based iterator. This means that at every call of the command, the server returns an updated cursor that the user needs to use as the cursor argument in the next call. An iteration starts when the cursor is set to 0, and terminates when the cursor returned by the server is 0. The following is an example of SCAN iteration: ``` redis 127.0.0.1:6379> scan 0 1) "17" 2) 1) "key:12" 2) "key:8" 3) "key:4" 4) "key:14" 5) "key:16" 6) "key:17" 7) "key:15" 8) "key:10" 9) "key:3" 10) "key:7" 11) "key:1" redis 127.0.0.1:6379> scan 17 1) "0" 2) 1) "key:5" 2) "key:18" 3) "key:0" 4) "key:2" 5) "key:19" 6) "key:13" 7) "key:6" 8) "key:9" 9) "key:11" ``` In the example above, the first call uses zero as a cursor, to start the iteration. 
The second call uses the cursor returned by the previous call (the first element of the reply), that is, 17. As you can see, the **SCAN return value** is an array of two values: the first value is the new cursor to use in the next call, the second value is an array of elements. Since in the second call the returned cursor is 0, the server signaled to the caller that the iteration finished, and the collection was completely explored. Starting an iteration with a cursor value of 0, and calling `SCAN` until the returned cursor is 0 again is called a **full iteration**. Scan guarantees --------------- The `SCAN` command, and the other commands in the `SCAN` family, are able to provide to the user a set of guarantees associated with full iterations. * A full iteration always retrieves all the elements that were present in the collection from the start to the end of a full iteration. This means that if a given element is inside the collection when an iteration is started, and is still there when an iteration terminates, then at some point `SCAN` returned it to the user. * A full iteration never returns any element that was NOT present in the collection from the start to the end of a full iteration. So if an element was removed before the start of an iteration, and is never added back to the collection for all the time an iteration lasts, `SCAN` ensures that this element will never be returned. However, because `SCAN` has very little state associated (just the cursor) it has the following drawbacks: * A given element may be returned multiple times. It is up to the application to handle the case of duplicated elements, for example only using the returned elements in order to perform operations that are safe when re-applied multiple times. * Elements that were not constantly present in the collection during a full iteration, may be returned or not: it is undefined. 
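The full-iteration loop described above can be sketched as follows; the `fake_scan` server is a hypothetical stand-in for the real command, and unlike real SCAN its cursor is a simple offset (real cursors are opaque values that must be passed back verbatim):

```python
def full_iteration(scan):
    """Drive a SCAN-style cursor to completion: start at 0 and stop when the
    server returns cursor 0 again. `scan` is any callable mimicking the command."""
    cursor, seen = 0, []
    while True:
        cursor, elements = scan(cursor)
        seen.extend(elements)
        if cursor == 0:
            break
    return seen

# A fake server paging through 25 keys, 10 per call (offset doubles as cursor).
keys = [f"key:{i}" for i in range(25)]
def fake_scan(cursor):
    page = keys[cursor:cursor + 10]
    next_cursor = cursor + 10 if cursor + 10 < len(keys) else 0
    return next_cursor, page

print(len(full_iteration(fake_scan)))  # 25
```

Because of the duplicate-element caveat above, real callers often accumulate results into a set rather than a list.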
Number of elements returned at every SCAN call
----------------------------------------------

`SCAN` family functions do not guarantee that the number of elements returned per call falls within a given range. The commands are also allowed to return zero elements, and the client should not consider the iteration complete as long as the returned cursor is not zero.

However the number of returned elements is reasonable: in practical terms, `SCAN` may return a maximum of a few tens of elements per call when iterating a large collection, or may return all the elements of the collection in a single call when the iterated collection is small enough to be internally represented as an encoded data structure (this happens for small sets, hashes and sorted sets).

However there is a way for the user to tune the order of magnitude of the number of returned elements per call using the **COUNT** option.

The COUNT option
----------------

While `SCAN` does not provide guarantees about the number of elements returned at every iteration, it is possible to empirically adjust the behavior of `SCAN` using the **COUNT** option. Basically with COUNT the user specifies the *amount of work that should be done at every call in order to retrieve elements from the collection*. This is **just a hint** for the implementation, however generally speaking this is what you could expect most of the time from the implementation.

* The default COUNT value is 10.
* When iterating the key space, or a Set, Hash or Sorted Set that is big enough to be represented by a hash table, assuming no **MATCH** option is used, the server will usually return *count* or a bit more than *count* elements per call. Please check the *why SCAN may return all the elements at once* section later in this document.
* When iterating Sets encoded as intsets (small sets composed of just integers), or Hashes and Sorted Sets encoded as ziplists (small hashes and sorted sets composed of small individual values), usually all the elements are returned in the first `SCAN` call regardless of the COUNT value.

Important: **there is no need to use the same COUNT value** for every iteration. The caller is free to change the count from one iteration to the other as required, as long as the cursor passed in the next call is the one obtained in the previous call to the command.

The MATCH option
----------------

It is possible to only iterate elements matching a given glob-style pattern, similarly to the behavior of the [`KEYS`](../keys) command that takes a pattern as its only argument.

To do so, just append the `MATCH <pattern>` arguments at the end of the `SCAN` command (it works with all the SCAN family commands).

This is an example of iteration using **MATCH**:

```
redis 127.0.0.1:6379> sadd myset 1 2 3 foo foobar feelsgood
(integer) 6
redis 127.0.0.1:6379> sscan myset 0 match f*
1) "0"
2) 1) "foo"
   2) "feelsgood"
   3) "foobar"
redis 127.0.0.1:6379>
```

It is important to note that the **MATCH** filter is applied after elements are retrieved from the collection, just before returning data to the client. This means that if the pattern matches very few elements inside the collection, `SCAN` will likely return no elements in most iterations.
An example is shown below:

```
redis 127.0.0.1:6379> scan 0 MATCH *11*
1) "288"
2) 1) "key:911"
redis 127.0.0.1:6379> scan 288 MATCH *11*
1) "224"
2) (empty list or set)
redis 127.0.0.1:6379> scan 224 MATCH *11*
1) "80"
2) (empty list or set)
redis 127.0.0.1:6379> scan 80 MATCH *11*
1) "176"
2) (empty list or set)
redis 127.0.0.1:6379> scan 176 MATCH *11* COUNT 1000
1) "0"
2)  1) "key:611"
    2) "key:711"
    3) "key:118"
    4) "key:117"
    5) "key:311"
    6) "key:112"
    7) "key:111"
    8) "key:110"
    9) "key:113"
   10) "key:211"
   11) "key:411"
   12) "key:115"
   13) "key:116"
   14) "key:114"
   15) "key:119"
   16) "key:811"
   17) "key:511"
   18) "key:11"
redis 127.0.0.1:6379>
```

As you can see, most of the calls returned zero elements, but in the last call a COUNT of 1000 was used in order to force the command to do more scanning for that iteration.

The TYPE option
---------------

You can use the `TYPE` option to ask `SCAN` to only return objects that match a given `type`, allowing you to iterate through the database looking for keys of a specific type. The **TYPE** option is only available on the whole-database `SCAN`, not [`HSCAN`](../hscan) or [`ZSCAN`](../zscan) etc.

The `type` argument is the same string name that the [`TYPE`](../type) command returns. Note a quirk: some Redis types, such as GeoHashes, HyperLogLogs, Bitmaps, and Bitfields, may internally be implemented using other Redis types, such as a string or zset, and so cannot be distinguished from other keys of that same type by `SCAN`.
For example, a ZSET and GEOHASH:

```
redis 127.0.0.1:6379> GEOADD geokey 0 0 value
(integer) 1
redis 127.0.0.1:6379> ZADD zkey 1000 value
(integer) 1
redis 127.0.0.1:6379> TYPE geokey
zset
redis 127.0.0.1:6379> TYPE zkey
zset
redis 127.0.0.1:6379> SCAN 0 TYPE zset
1) "0"
2) 1) "geokey"
   2) "zkey"
```

It is important to note that the **TYPE** filter is also applied after elements are retrieved from the database, so the option does not reduce the amount of work the server has to do to complete a full iteration, and for rare types you may receive no elements in many iterations.

Multiple parallel iterations
----------------------------

It is possible for an infinite number of clients to iterate the same collection at the same time, as the full state of the iterator is in the cursor, which is obtained and returned to the client at every call. No server-side state is kept at all.

Terminating iterations in the middle
------------------------------------

Since there is no server-side state, and the full state is captured by the cursor, the caller is free to terminate an iteration half-way without signaling this to the server in any way. An infinite number of iterations can be started and never terminated without any issue.

Calling SCAN with a corrupted cursor
------------------------------------

Calling `SCAN` with a broken, negative, out-of-range, or otherwise invalid cursor will result in undefined behavior but never in a crash. What is undefined is that the guarantees about the returned elements can no longer be ensured by the `SCAN` implementation.

The only valid cursors to use are:

* The cursor value of 0 when starting an iteration.
* The cursor returned by the previous call to SCAN in order to continue the iteration.
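The statelessness described above (the cursor is the entire iteration state) can be illustrated with a small Python sketch: one client pauses mid-iteration, another runs a complete iteration in between, and the first then resumes from its own cursor unaffected. The `make_scan` function is a hypothetical in-memory stand-in for the server; real Redis uses reverse-binary cursor arithmetic, but the client-visible contract is the same.

```python
def make_scan(keys, page=2):
    """Simplified server side of SCAN: the cursor is an offset into a
    snapshot of the key space. No per-client state is kept anywhere."""
    def scan(cursor):
        chunk = keys[cursor:cursor + page]
        nxt = cursor + page
        return (0 if nxt >= len(keys) else nxt), chunk
    return scan

keys = ["a", "b", "c", "d", "e"]
scan = make_scan(keys)

def iterate():
    """A complete, independent full iteration (own cursor, start at 0)."""
    cursor, out = 0, []
    while True:
        cursor, chunk = scan(cursor)
        out.extend(chunk)
        if cursor == 0:
            return out

# Client 1 fetches one page, client 2 performs a full iteration in
# between, then client 1 resumes from the cursor it held on to.
c1, chunk1 = scan(0)      # client 1: first page, cursor now 2
full2 = iterate()         # client 2: complete iteration, sees everything
c1, chunk2 = scan(c1)     # client 1: resumes exactly where it left off

assert full2 == keys
assert chunk1 + chunk2 == keys[:4]
```

Abandoning an iteration is equally free: client 1 could simply discard `c1` and the server would never know.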
Guarantee of termination
------------------------

The `SCAN` algorithm is guaranteed to terminate only if the size of the iterated collection remains bounded to a given maximum size; otherwise, iterating a collection that always grows may result in `SCAN` never terminating a full iteration.

This is easy to see intuitively: if the collection grows there is more and more work to do in order to visit all the possible elements, and the ability to terminate the iteration depends on the number of calls to `SCAN` and its COUNT option value compared with the rate at which the collection grows.

Why SCAN may return all the items of an aggregate data type in a single call?
-----------------------------------------------------------------------------

In the `COUNT` option documentation, we state that sometimes this family of commands may return all the elements of a Set, Hash or Sorted Set at once in a single call, regardless of the `COUNT` option value. The reason why this happens is that the cursor-based iterator can be implemented, and is useful, only when the aggregate data type that we are scanning is represented as a hash table. However Redis uses a [memory optimization](https://redis.io/topics/memory-optimization) where small aggregate data types, until they reach a given amount of items or a given max size of single elements, are represented using a compact single-allocation packed encoding.

When this is the case, `SCAN` has no meaningful cursor to return, and must iterate the whole data structure at once, so the only sane behavior it has is to return everything in a call. However once the data structures are bigger and are promoted to use real hash tables, the `SCAN` family of commands will resort to the normal behavior. Note that since this special behavior of returning all the elements is true only for small aggregates, it has no effect on the command complexity or latency.
However the exact limits to get converted into real hash tables are [user configurable](https://redis.io/topics/memory-optimization), so the maximum number of elements you can see returned in a single call depends on how big an aggregate data type could be and still use the packed representation.

Also note that this behavior is specific to [`SSCAN`](../sscan), [`HSCAN`](../hscan) and [`ZSCAN`](../zscan). `SCAN` itself never shows this behavior because the key space is always represented by hash tables.

Return value
------------

`SCAN`, [`SSCAN`](../sscan), [`HSCAN`](../hscan) and [`ZSCAN`](../zscan) return a two-element multi-bulk reply, where the first element is a string representing an unsigned 64 bit number (the cursor), and the second element is a multi-bulk with an array of elements.

* `SCAN` array of elements is a list of keys.
* [`SSCAN`](../sscan) array of elements is a list of Set members.
* [`HSCAN`](../hscan) array of elements contains two elements, a field and a value, for every returned element of the Hash.
* [`ZSCAN`](../zscan) array of elements contains two elements, a member and its associated score, for every returned element of the sorted set.

Additional examples
-------------------

Iteration of a Hash value.

```
redis 127.0.0.1:6379> hmset hash name Jack age 33
OK
redis 127.0.0.1:6379> hscan hash 0
1) "0"
2) 1) "name"
   2) "Jack"
   3) "age"
   4) "33"
```

History
-------

* Starting with Redis version 6.0.0: Added the `TYPE` subcommand.

redis FT.ALIASDEL

FT.ALIASDEL
===========

```
FT.ALIASDEL
```

Syntax

```
FT.ALIASDEL alias
```

Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.0.0](https://redis.io/docs/stack/search)

Time complexity: O(1)

Remove an alias from an index

[Examples](#examples)

Required arguments
------------------

`alias` is the index alias to be removed.

Return
------

FT.ALIASDEL returns a simple string reply `OK` if executed correctly, or an error reply otherwise.
Examples
--------

**Remove an alias from an index**

Remove an alias from an index.

```
127.0.0.1:6379> FT.ALIASDEL alias
OK
```

See also
--------

[`FT.ALIASADD`](../ft.aliasadd) | [`FT.ALIASUPDATE`](../ft.aliasupdate)

Related topics
--------------

[RediSearch](https://redis.io/docs/stack/search)

redis UNSUBSCRIBE

UNSUBSCRIBE
===========

```
UNSUBSCRIBE
```

Syntax

```
UNSUBSCRIBE [channel [channel ...]]
```

Available since: 2.0.0

Time complexity: O(N) where N is the number of clients already subscribed to a channel.

ACL categories: `@pubsub`, `@slow`,

Unsubscribes the client from the given channels, or from all of them if none is given.

When no channels are specified, the client is unsubscribed from all the previously subscribed channels. In this case, a message for every unsubscribed channel will be sent to the client.

Return
------

When successful, this command doesn't return anything. Instead, for each channel, one message with the first element being the string "unsubscribe" is pushed as a confirmation that the command succeeded.

redis BF.LOADCHUNK

BF.LOADCHUNK
============

```
BF.LOADCHUNK
```

Syntax

```
BF.LOADCHUNK key iterator data
```

Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 1.0.0](https://redis.io/docs/stack/bloom)

Time complexity: O(n), where n is the capacity

Restores a filter previously saved using `SCANDUMP`. See the `SCANDUMP` command for example usage.

This command overwrites any bloom filter stored under `key`. Make sure that the bloom filter is not changed between invocations.

### Parameters

* **key**: Name of the key to restore
* **iterator**: Iterator value associated with `data` (returned by `SCANDUMP`)
* **data**: Current data chunk (returned by `SCANDUMP`)

Return
------

[Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) - `OK` if executed correctly, or [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors) otherwise.

Examples
--------

See BF.SCANDUMP for an example.
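The dump-and-restore pattern that `BF.SCANDUMP` and `BF.LOADCHUNK` are designed for can be sketched as follows. The command names and the start-at-0/stop-at-0 iterator contract come from the documentation above; the `FakeConn` object is a hypothetical in-memory stand-in for a real client connection (anything exposing an `execute_command`-style method), so treat this as a sketch of the pattern rather than a tested client recipe.

```python
def dump_filter(conn, key):
    """Export a Bloom filter as a list of (iterator, data) chunks.

    BF.SCANDUMP iteration starts with iterator 0 and is complete when
    the returned iterator is 0.
    """
    chunks, it = [], 0
    while True:
        it, data = conn.execute_command("BF.SCANDUMP", key, it)
        if it == 0:
            return chunks
        chunks.append((it, data))

def restore_filter(conn, key, chunks):
    """Replay saved chunks with BF.LOADCHUNK to rebuild the filter.

    Each chunk carries the iterator value it was dumped with.
    """
    for it, data in chunks:
        conn.execute_command("BF.LOADCHUNK", key, it, data)

class FakeConn:
    """Hypothetical stand-in for a Redis connection, for illustration only:
    serves two dump pages and records what gets loaded back."""
    def __init__(self):
        self.pages = {0: (1, b"p0"), 1: (2, b"p1"), 2: (0, b"")}
        self.loaded = []
    def execute_command(self, cmd, key, it, data=None):
        if cmd == "BF.SCANDUMP":
            return self.pages[it]
        self.loaded.append((it, data))
        return b"OK"

conn = FakeConn()
chunks = dump_filter(conn, "bf")
restore_filter(conn, "bf", chunks)
assert chunks == [(1, b"p0"), (2, b"p1")]
assert conn.loaded == chunks
```

With a real client the two functions would typically be called against different connections (dump from the source server, load into the target), keeping the filter unchanged in between as noted above.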
redis MSET

MSET
====

```
MSET
```

Syntax

```
MSET key value [key value ...]
```

Available since: 1.0.1

Time complexity: O(N) where N is the number of keys to set.

ACL categories: `@write`, `@string`, `@slow`,

Sets the given keys to their respective values. `MSET` replaces existing values with new values, just as regular [`SET`](../set). See [`MSETNX`](../msetnx) if you don't want to overwrite existing values.

`MSET` is atomic, so all given keys are set at once. It is not possible for clients to see that some of the keys were updated while others are unchanged.

Return
------

[Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): always `OK` since `MSET` can't fail.

Examples
--------

```
MSET key1 "Hello" key2 "World"
GET key1
GET key2
```

redis ACL

ACL
===

```
ACL GETUSER
```

Syntax

```
ACL GETUSER username
```

Available since: 6.0.0

Time complexity: O(N). Where N is the number of password, command and pattern rules that the user has.

ACL categories: `@admin`, `@slow`, `@dangerous`,

The command returns all the rules defined for an existing ACL user.

Specifically, it lists the user's ACL flags, password hashes, commands, key patterns, channel patterns (Added in version 6.2) and selectors (Added in version 7.0). Additional information may be returned in the future if more metadata is added to the user.

Command rules are always returned in the same format as the one used in the [`ACL SETUSER`](../acl-setuser) command. Before version 7.0, keys and channels were returned as an array of patterns; in version 7.0 and later they are also returned in the same format as the one used in the [`ACL SETUSER`](../acl-setuser) command. Note: This description of command rules reflects the user's effective permissions, so while it may not be identical to the set of rules used to configure the user, it is still functionally identical.
Selectors are listed in the order they were applied to the user, and include information about commands, key patterns, and channel patterns.

Return
------

[Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): a list of ACL rule definitions for the user.

If `user` does not exist a [Null reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings) is returned.

Examples
--------

Here's an example configuration for a user

```
> ACL SETUSER sample on nopass +GET allkeys &* (+SET ~key2)
"OK"
> ACL GETUSER sample
1) "flags"
2) 1) "on"
   2) "allkeys"
   3) "nopass"
3) "passwords"
4) (empty array)
5) "commands"
6) "+@all"
7) "keys"
8) "~*"
9) "channels"
10) "&*"
11) "selectors"
12) 1) 1) "commands"
       2) "+SET"
       3) "keys"
       4) "~key2"
       5) "channels"
       6) "&*"
```

History
-------

* Starting with Redis version 6.2.0: Added Pub/Sub channel patterns.
* Starting with Redis version 7.0.0: Added selectors and changed the format of key and channel patterns from a list to their rule representation.

redis BGSAVE

BGSAVE
======

```
BGSAVE
```

Syntax

```
BGSAVE [SCHEDULE]
```

Available since: 1.0.0

Time complexity: O(1)

ACL categories: `@admin`, `@slow`, `@dangerous`,

Save the DB in background.

Normally the OK code is immediately returned. Redis forks, the parent continues to serve the clients, the child saves the DB on disk then exits.

An error is returned if there is already a background save running or if there is another non-background-save process running, specifically an in-progress AOF rewrite.

If `BGSAVE SCHEDULE` is used, the command will immediately return `OK` when an AOF rewrite is in progress and schedule the background save to run at the next opportunity.

A client may be able to check if the operation succeeded using the [`LASTSAVE`](../lastsave) command.

Please refer to the [persistence documentation](https://redis.io/topics/persistence) for detailed information.
Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `Background saving started` if `BGSAVE` started correctly or `Background saving scheduled` when used with the `SCHEDULE` subcommand. History ------- * Starting with Redis version 3.2.2: Added the `SCHEDULE` option. redis XAUTOCLAIM XAUTOCLAIM ========== ``` XAUTOCLAIM ``` Syntax ``` XAUTOCLAIM key group consumer min-idle-time start [COUNT count] [JUSTID] ``` Available since: 6.2.0 Time complexity: O(1) if COUNT is small. ACL categories: `@write`, `@stream`, `@fast`, This command transfers ownership of pending stream entries that match the specified criteria. Conceptually, `XAUTOCLAIM` is equivalent to calling [`XPENDING`](../xpending) and then [`XCLAIM`](../xclaim), but provides a more straightforward way to deal with message delivery failures via [`SCAN`](../scan)-like semantics. Like [`XCLAIM`](../xclaim), the command operates on the stream entries at `<key>` and in the context of the provided `<group>`. It transfers ownership to `<consumer>` of messages pending for more than `<min-idle-time>` milliseconds and having an equal or greater ID than `<start>`. The optional `<count>` argument, which defaults to 100, is the upper limit of the number of entries that the command attempts to claim. Internally, the command begins scanning the consumer group's Pending Entries List (PEL) from `<start>` and filters out entries having an idle time less than or equal to `<min-idle-time>`. The maximum number of pending entries that the command scans is the product of multiplying `<count>`'s value by 10 (hard-coded). It is possible, therefore, that the number of entries claimed will be less than the specified value. The optional `JUSTID` argument changes the reply to return just an array of IDs of messages successfully claimed, without returning the actual message. Using this option means the retry counter is not incremented. The command returns the claimed entries as an array. 
It also returns a stream ID intended for cursor-like use as the `<start>` argument for its subsequent call. When there are no remaining PEL entries, the command returns the special `0-0` ID to signal completion. However, note that you may want to continue calling `XAUTOCLAIM` even after the scan is complete, with `0-0` as the `<start>` ID, because enough time may have passed for older pending entries to become eligible for claiming.

Note that only messages that are idle longer than `<min-idle-time>` are claimed, and claiming a message resets its idle time. This ensures that only a single consumer can successfully claim a given pending message at a specific instant of time and trivially reduces the probability of processing the same message multiple times.

While iterating the PEL, if `XAUTOCLAIM` stumbles upon a message which doesn't exist in the stream anymore (either trimmed or deleted by [`XDEL`](../xdel)) it does not claim it, and deletes it from the PEL in which it was found. This feature was introduced in Redis 7.0. These message IDs are returned to the caller as a part of `XAUTOCLAIM`'s reply.

Lastly, claiming a message with `XAUTOCLAIM` also increments the attempted deliveries count for that message, unless the `JUSTID` option has been specified (which only delivers the message ID, not the message itself). Messages that cannot be processed for some reason - for example, because consumers systematically crash when processing them - will exhibit high attempted delivery counts that can be detected by monitoring.

Return
------

[Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays), specifically:

An array with three elements:

1. A stream ID to be used as the `<start>` argument for the next call to `XAUTOCLAIM`.
2. An array containing all the successfully claimed messages in the same format as [`XRANGE`](../xrange).
3. An array containing message IDs that no longer exist in the stream, and were deleted from the PEL in which they were found.
Examples -------- ``` > XAUTOCLAIM mystream mygroup Alice 3600000 0-0 COUNT 25 1) "0-0" 2) 1) 1) "1609338752495-0" 2) 1) "field" 2) "value" 3) (empty array) ``` In the above example, we attempt to claim up to 25 entries that are pending and idle (not having been acknowledged or claimed) for at least an hour, starting at the stream's beginning. The consumer "Alice" from the "mygroup" group acquires ownership of these messages. Note that the stream ID returned in the example is `0-0`, indicating that the entire stream was scanned. We can also see that `XAUTOCLAIM` did not stumble upon any deleted messages (the third reply element is an empty array). History ------- * Starting with Redis version 7.0.0: Added an element to the reply array, containing deleted entries the command cleared from the PEL redis REPLCONF REPLCONF ======== ``` REPLCONF ``` Syntax ``` REPLCONF ``` Available since: 3.0.0 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, The `REPLCONF` command is an internal command. It is used by a Redis master to configure a connected replica. redis PSYNC PSYNC ===== ``` PSYNC ``` Syntax ``` PSYNC replicationid offset ``` Available since: 2.8.0 Time complexity: ACL categories: `@admin`, `@slow`, `@dangerous`, Initiates a replication stream from the master. The `PSYNC` command is called by Redis replicas for initiating a replication stream from the master. For more information about replication in Redis please check the [replication page](https://redis.io/topics/replication). Return ------ **Non standard return value**, a bulk transfer of the data followed by [`PING`](../ping) and write requests from the master. redis SUNIONSTORE SUNIONSTORE =========== ``` SUNIONSTORE ``` Syntax ``` SUNIONSTORE destination key [key ...] ``` Available since: 1.0.0 Time complexity: O(N) where N is the total number of elements in all given sets. 
ACL categories: `@write`, `@set`, `@slow`,

This command is equal to [`SUNION`](../sunion), but instead of returning the resulting set, it is stored in `destination`.

If `destination` already exists, it is overwritten.

Return
------

[Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of elements in the resulting set.

Examples
--------

```
SADD key1 "a"
SADD key1 "b"
SADD key1 "c"
SADD key2 "c"
SADD key2 "d"
SADD key2 "e"
SUNIONSTORE key key1 key2
SMEMBERS key
```

redis ZDIFFSTORE

ZDIFFSTORE
==========

```
ZDIFFSTORE
```

Syntax

```
ZDIFFSTORE destination numkeys key [key ...]
```

Available since: 6.2.0

Time complexity: O(L + (N-K)log(N)) worst case where L is the total number of elements in all the sets, N is the size of the first set, and K is the size of the result set.

ACL categories: `@write`, `@sortedset`, `@slow`,

Computes the difference between the first and all successive input sorted sets and stores the result in `destination`. The total number of input keys is specified by `numkeys`.

Keys that do not exist are considered to be empty sets.

If `destination` already exists, it is overwritten.

Return
------

[Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of elements in the resulting sorted set at `destination`.

Examples
--------

```
ZADD zset1 1 "one"
ZADD zset1 2 "two"
ZADD zset1 3 "three"
ZADD zset2 1 "one"
ZADD zset2 2 "two"
ZDIFFSTORE out 2 zset1 zset2
ZRANGE out 0 -1 WITHSCORES
```

redis FUNCTION

FUNCTION
========

```
FUNCTION STATS
```

Syntax

```
FUNCTION STATS
```

Available since: 7.0.0

Time complexity: O(1)

ACL categories: `@slow`, `@scripting`,

Return information about the function that's currently running and information about the available execution engines.

The reply is a map with two keys:

1. `running_script`: information about the running script. If there's no in-flight function, the server replies with a *nil*.
Otherwise, this is a map with the following keys:
   * **name:** the name of the function.
   * **command:** the command and arguments used for invoking the function.
   * **duration\_ms:** the function's runtime duration in milliseconds.
2. `engines`: this is a map of maps. Each entry in the map represents a single engine, and each engine map contains statistics about the engine, such as the number of functions and number of libraries.

You can use this command to inspect the invocation of a long-running function and decide whether to kill it with the [`FUNCTION KILL`](../function-kill) command.

For more information please refer to [Introduction to Redis Functions](https://redis.io/topics/functions-intro).

Return
------

[Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays)

redis LMOVE

LMOVE
=====

```
LMOVE
```

Syntax

```
LMOVE source destination <LEFT | RIGHT> <LEFT | RIGHT>
```

Available since: 6.2.0

Time complexity: O(1)

ACL categories: `@write`, `@list`, `@slow`,

Atomically returns and removes the first/last element (head/tail depending on the `wherefrom` argument) of the list stored at `source`, and pushes the element as the first/last element (head/tail depending on the `whereto` argument) of the list stored at `destination`.

For example: consider `source` holding the list `a,b,c`, and `destination` holding the list `x,y,z`. Executing `LMOVE source destination RIGHT LEFT` results in `source` holding `a,b` and `destination` holding `c,x,y,z`.

If `source` does not exist, the value `nil` is returned and no operation is performed. If `source` and `destination` are the same, the operation is equivalent to removing the first/last element from the list and pushing it as first/last element of the list, so it can be considered as a list rotation command (or a no-op if `wherefrom` is the same as `whereto`).

This command comes in place of the now deprecated [`RPOPLPUSH`](../rpoplpush). Doing `LMOVE source destination RIGHT LEFT` is equivalent.
Return
------

[Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): the element being popped and pushed.

Examples
--------

```
RPUSH mylist "one"
RPUSH mylist "two"
RPUSH mylist "three"
LMOVE mylist myotherlist RIGHT LEFT
LMOVE mylist myotherlist LEFT RIGHT
LRANGE mylist 0 -1
LRANGE myotherlist 0 -1
```

Pattern: Reliable queue
-----------------------

Redis is often used as a messaging server to implement processing of background jobs or other kinds of messaging tasks.

A simple form of queue is often obtained by pushing values into a list on the producer side, and waiting for these values on the consumer side using [`RPOP`](../rpop) (using polling), or [`BRPOP`](../brpop) if the client is better served by a blocking operation.

However in this context the obtained queue is not *reliable*, as messages can be lost, for example if there is a network problem or if the consumer crashes just after the message is received but before it has been processed.

`LMOVE` (or [`BLMOVE`](../blmove) for the blocking variant) offers a way to avoid this problem: the consumer fetches the message and at the same time pushes it into a *processing* list. It will use the [`LREM`](../lrem) command in order to remove the message from the *processing* list once the message has been processed.

An additional client may monitor the *processing* list for items that remain there for too long, and will push those timed-out items into the queue again if needed.

Pattern: Circular list
----------------------

Using `LMOVE` with the same source and destination key, a client can visit all the elements of an N-element list, one after the other, in O(N) without transferring the full list from the server to the client using a single [`LRANGE`](../lrange) operation.
The above pattern works even in the following conditions:

* There are multiple clients rotating the list: they'll fetch different elements, until all the elements of the list are visited, and the process restarts.
* Even if other clients are actively pushing new items at the end of the list.

The above makes it very simple to implement a system where a set of items must be processed by N workers continuously as fast as possible. An example is a monitoring system that must check that a set of web sites are reachable, with the smallest delay possible, using a number of parallel workers.

Note that this implementation of workers is trivially scalable and reliable, because even if a message is lost the item is still in the queue and will be processed at the next iteration.

redis TIME

TIME
====

```
TIME
```

Syntax

```
TIME
```

Available since: 2.6.0

Time complexity: O(1)

ACL categories: `@fast`,

The `TIME` command returns the current server time as a two-item list: a Unix timestamp and the amount of microseconds already elapsed in the current second. Basically the interface is very similar to the one of the `gettimeofday` system call.

Return
------

[Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays), specifically:

A multi bulk reply containing two elements:

* unix time in seconds.
* microseconds.

Examples
--------

```
TIME
TIME
```

redis TDIGEST.MERGE

TDIGEST.MERGE
=============

```
TDIGEST.MERGE
```

Syntax

```
TDIGEST.MERGE destination-key numkeys source-key [source-key ...] [COMPRESSION compression] [OVERRIDE]
```

Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.4.0](https://redis.io/docs/stack/bloom)

Time complexity: O(N\*K), where N is the number of centroids and K is the number of input sketches

Merges multiple t-digest sketches into a single sketch.

Required arguments
------------------

`destination-key` is the key name for a t-digest sketch to merge observation values to.
If `destination-key` does not exist - a new sketch is created.

If `destination-key` is an existing sketch, its values are merged with the values of the source keys. To override the destination key contents use `OVERRIDE`.

`numkeys` is the number of sketches to merge observation values from (1 or more).

`source-key` each is a key name for a t-digest sketch to merge observation values from.

Optional arguments
------------------

`COMPRESSION compression` is a controllable tradeoff between accuracy and memory consumption. 100 is a common value for normal uses. 1000 is more accurate. If no value is passed, the compression defaults to 100. For more information on the scaling of accuracy versus the compression parameter see [*The t-digest: Efficient estimates of distributions*](https://www.sciencedirect.com/science/article/pii/S2665963820300403).

When `COMPRESSION` is not specified:

* If `destination-key` does not exist or if `OVERRIDE` is specified, the compression is set to the maximal value among all source sketches.
* If `destination-key` already exists and `OVERRIDE` is not specified, its compression is not changed.

`OVERRIDE` When specified, if `destination-key` already exists, it is overwritten.

Return value
------------

OK on success, error otherwise.

Examples
--------

```
redis> TDIGEST.CREATE s1
OK
redis> TDIGEST.CREATE s2
OK
redis> TDIGEST.ADD s1 10.0 20.0
OK
redis> TDIGEST.ADD s2 30.0 40.0
OK
redis> TDIGEST.MERGE sM 2 s1 s2
OK
redis> TDIGEST.BYRANK sM 0 1 2 3 4
1) "10"
2) "20"
3) "30"
4) "40"
5) "inf"
```

redis FT.TAGVALS

FT.TAGVALS
==========

```
FT.TAGVALS
```

Syntax

```
FT.TAGVALS index field_name
```

Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.0.0](https://redis.io/docs/stack/search)

Time complexity: O(N)

Return a distinct set of values indexed in a Tag field

[Examples](#examples)

Required arguments
------------------

`index` is the full-text index name. You must first create the index using [`FT.CREATE`](../ft.create).
`field_name` is the name of a Tag field defined in the schema.

Use FT.TAGVALS if your tag indexes things like cities, categories, and so on.

Limitations
-----------

FT.TAGVALS provides no paging or sorting, and the tags are not alphabetically sorted. FT.TAGVALS only operates on [tag fields](https://redis.io/docs/stack/search/reference/tags). The returned strings are lowercase with whitespace removed, but otherwise unchanged.

Return
------

FT.TAGVALS returns an array reply of all distinct tags in the tag index.

Examples
--------

**Return a set of values indexed in a Tag field**

```
127.0.0.1:6379> FT.TAGVALS idx myTag
1) "Hello"
2) "World"
```

See also
--------

[`FT.CREATE`](../ft.create)

Related topics
--------------

* [Tag fields](https://redis.io/docs/stack/search/reference/tags)
* [RediSearch](https://redis.io/docs/stack/search)
redis MODULE MODULE ====== ``` MODULE LOADEX ``` Syntax ``` MODULE LOADEX path [CONFIG name value [CONFIG name value ...]] [ARGS args [args ...]] ``` Available since: 7.0.0 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous` Loads a module from a dynamic library at runtime with configuration directives. This is an extended version of the [`MODULE LOAD`](../module-load) command. It loads and initializes the Redis module from the dynamic library specified by the `path` argument. The `path` should be the absolute path of the library, including the full filename. You can use the optional `CONFIG` argument to provide the module with configuration directives. Any additional arguments that follow the `ARGS` keyword are passed unmodified to the module. **Note**: modules can also be loaded at server startup with the `loadmodule` configuration directive in `redis.conf`. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` if the module was loaded. redis ACL ACL === ``` ACL LOG ``` Syntax ``` ACL LOG [count | RESET] ``` Available since: 6.0.0 Time complexity: O(N) with N being the number of entries shown. ACL categories: `@admin`, `@slow`, `@dangerous` The command shows a list of recent ACL security events: 1. Failed attempts to authenticate a connection with [`AUTH`](../auth) or [`HELLO`](../hello). 2. Commands denied because they violated the current ACL rules. 3. Commands denied because they accessed keys not allowed by the current ACL rules. The optional argument specifies how many entries to show. By default up to ten failures are returned. The special [`RESET`](../reset) argument clears the log. Entries are displayed starting from the most recent. Return ------ When called to show security events: [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): a list of ACL security events. 
When called with [`RESET`](../reset): [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` if the security log was cleared. Examples -------- ``` > AUTH someuser wrongpassword (error) WRONGPASS invalid username-password pair > ACL LOG 1 1) 1) "count" 2) (integer) 1 3) "reason" 4) "auth" 5) "context" 6) "toplevel" 7) "object" 8) "AUTH" 9) "username" 10) "someuser" 11) "age-seconds" 12) "8.038" 13) "client-info" 14) "id=3 addr=127.0.0.1:57275 laddr=127.0.0.1:6379 fd=8 name= age=16 idle=0 flags=N db=0 sub=0 psub=0 ssub=0 multi=-1 qbuf=48 qbuf-free=16842 argv-mem=25 multi-mem=0 rbs=1024 rbp=0 obl=0 oll=0 omem=0 tot-mem=18737 events=r cmd=auth user=default redir=-1 resp=2" 15) "entry-id" 16) (integer) 0 17) "timestamp-created" 18) (integer) 1675361492408 19) "timestamp-last-updated" 20) (integer) 1675361492408 ``` Each log entry is composed of the following fields: 1. `count`: The number of security events detected within a 60 second period that are represented by this entry. 2. `reason`: The reason that the security events were logged. Either `command`, `key`, `channel`, or `auth`. 3. `context`: The context that the security events were detected in. Either `toplevel`, `multi`, `lua`, or `module`. 4. `object`: The resource that the user had insufficient permissions to access. `auth` when the reason is `auth`. 5. `username`: The username that executed the command that caused the security events or the username that had a failed authentication attempt. 6. `age-seconds`: Age of the log entry in seconds. 7. `client-info`: Displays the client info of a client which caused one of the security events. 8. `entry-id`: The sequence number of the entry (starting at 0) since the server process started. Can also be used to check if items were “lost”, if they fell between periods. 9. `timestamp-created`: A UNIX timestamp in `milliseconds` at the time the entry was first created. 10. 
`timestamp-last-updated`: A UNIX timestamp in `milliseconds` at the time the entry was last updated. History ------- * Starting with Redis version 7.2.0: Added entry ID, timestamp created, and timestamp last updated. redis RPUSHX RPUSHX ====== ``` RPUSHX ``` Syntax ``` RPUSHX key element [element ...] ``` Available since: 2.2.0 Time complexity: O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments. ACL categories: `@write`, `@list`, `@fast` Inserts the specified values at the tail of the list stored at `key`, only if `key` already exists and holds a list. In contrast to [`RPUSH`](../rpush), no operation will be performed when `key` does not yet exist. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the length of the list after the push operation. Examples -------- ``` RPUSH mylist "Hello" RPUSHX mylist "World" RPUSHX myotherlist "World" LRANGE mylist 0 -1 LRANGE myotherlist 0 -1 ``` History ------- * Starting with Redis version 4.0.0: Accepts multiple `element` arguments. redis JSON.DEBUG JSON.DEBUG ========== ``` JSON.DEBUG MEMORY ``` Syntax ``` JSON.DEBUG MEMORY key [path] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [JSON 1.0.0](https://redis.io/docs/stack/json) Time complexity: O(N) when path is evaluated to a single value, where N is the size of the value; O(N) when path is evaluated to multiple values, where N is the size of the key Report a value's memory usage in bytes [Examples](#examples) Required arguments ------------------ `key` is the key to parse. Optional arguments ------------------ `path` is the JSONPath to report on. Default is the root `$`. Return ------ JSON.DEBUG MEMORY returns an integer reply: the value's size in bytes. For more information about replies, see [Redis serialization protocol specification](https://redis.io/docs/reference/protocol-spec). Examples -------- **Report a value's memory usage in bytes** Create a JSON document. 
``` 127.0.0.1:6379> JSON.SET item:2 $ '{"name":"Wireless earbuds","description":"Wireless Bluetooth in-ear headphones","connection":{"wireless":true,"type":"Bluetooth"},"price":64.99,"stock":17,"colors":["black","white"], "max\_level":[80, 100, 120]}' OK ``` Get the values' memory usage in bytes. ``` 127.0.0.1:6379> JSON.DEBUG MEMORY item:2 (integer) 253 ``` See also -------- [`JSON.SET`](../json.set) | [`JSON.ARRLEN`](../json.arrlen) Related topics -------------- * [RedisJSON](https://redis.io/docs/stack/json) * [Index and search JSON documents](https://redis.io/docs/stack/search/indexing_json) redis HRANDFIELD HRANDFIELD ========== ``` HRANDFIELD ``` Syntax ``` HRANDFIELD key [count [WITHVALUES]] ``` Available since: 6.2.0 Time complexity: O(N) where N is the number of fields returned ACL categories: `@read`, `@hash`, `@slow`, When called with just the `key` argument, return a random field from the hash value stored at `key`. If the provided `count` argument is positive, return an array of **distinct fields**. The array's length is either `count` or the hash's number of fields ([`HLEN`](../hlen)), whichever is lower. If called with a negative `count`, the behavior changes and the command is allowed to return the **same field multiple times**. In this case, the number of returned fields is the absolute value of the specified `count`. The optional `WITHVALUES` modifier changes the reply so it includes the respective values of the randomly selected hash fields. Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): without the additional `count` argument, the command returns a Bulk Reply with the randomly selected field, or `nil` when `key` does not exist. [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): when the additional `count` argument is passed, the command returns an array of fields, or an empty array when `key` does not exist. 
If the `WITHVALUES` modifier is used, the reply is a list of fields and their values from the hash. Examples -------- ``` HMSET coin heads obverse tails reverse edge null HRANDFIELD coin HRANDFIELD coin HRANDFIELD coin -5 WITHVALUES ``` Specification of the behavior when count is passed -------------------------------------------------- When the `count` argument is a positive value this command behaves as follows: * No repeated fields are returned. * If `count` is bigger than the number of fields in the hash, the command will return only the whole hash, without additional fields. * The order of fields in the reply is not truly random, so it is up to the client to shuffle them if needed. When the `count` is a negative value, the behavior changes as follows: * Repeating fields are possible. * Exactly `count` fields, or an empty array if the hash is empty (non-existing key), are always returned. * The order of fields in the reply is truly random. redis CMS.INITBYPROB CMS.INITBYPROB ============== ``` CMS.INITBYPROB ``` Syntax ``` CMS.INITBYPROB key error probability ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(1) Initializes a Count-Min Sketch to accommodate requested tolerances. ### Parameters: * **key**: The name of the sketch. * **error**: Estimated size of the error. The error is a percent of total counted items. This affects the width of the sketch. * **probability**: The desired probability for an inflated count. This should be a decimal value between 0 and 1. This affects the depth of the sketch. For example, for a desired false positive rate of 0.1% (1 in 1000), `error_rate` should be set to 0.001. The closer this number is to zero, the greater the memory consumption per item and the more CPU usage per operation. 
Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) - `OK` if executed correctly, or [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors) otherwise. Examples -------- ``` redis> CMS.INITBYPROB test 0.001 0.01 OK ``` redis HMGET HMGET ===== ``` HMGET ``` Syntax ``` HMGET key field [field ...] ``` Available since: 2.0.0 Time complexity: O(N) where N is the number of fields being requested. ACL categories: `@read`, `@hash`, `@fast` Returns the values associated with the specified `fields` in the hash stored at `key`. For every `field` that does not exist in the hash, a `nil` value is returned. Because non-existing keys are treated as empty hashes, running `HMGET` against a non-existing `key` will return a list of `nil` values. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list of values associated with the given fields, in the same order as they are requested. ``` HSET myhash field1 "Hello" HSET myhash field2 "World" HMGET myhash field1 field2 nofield ``` redis MSETNX MSETNX ====== ``` MSETNX ``` Syntax ``` MSETNX key value [key value ...] ``` Available since: 1.0.1 Time complexity: O(N) where N is the number of keys to set. ACL categories: `@write`, `@string`, `@slow` Sets the given keys to their respective values. `MSETNX` will not perform any operation at all even if just a single key already exists. Because of this semantic, `MSETNX` can be used in order to set different keys representing different fields of a unique logical object in a way that ensures that either all the fields or none at all are set. `MSETNX` is atomic, so all given keys are set at once. It is not possible for clients to see that some of the keys were updated while others are unchanged. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), specifically: * `1` if all the keys were set. * `0` if no key was set (at least one key already existed). 
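The all-or-nothing semantics can be modeled with a minimal in-memory sketch. A plain Python dict stands in for the keyspace, and `msetnx` is a hypothetical helper (not a client API): if any key already exists, nothing is written and 0 is returned; otherwise all keys are set together and 1 is returned.

```python
def msetnx(db: dict, *pairs):
    """Model MSETNX against a dict: set all key/value pairs only if
    none of the keys exist yet."""
    keys, values = pairs[0::2], pairs[1::2]
    if any(k in db for k in keys):  # a single existing key aborts the whole call
        return 0
    db.update(zip(keys, values))    # all keys are set together
    return 1

db = {}
print(msetnx(db, "key1", "Hello", "key2", "there"))  # 1
print(msetnx(db, "key2", "new", "key3", "world"))    # 0 (key2 already exists)
print("key3" in db)                                   # False: key3 was not set
```

Because the existence check and the writes happen as one step, a failed call leaves the keyspace completely untouched, which is what makes the command useful for multi-field objects.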
Examples -------- ``` MSETNX key1 "Hello" key2 "there" MSETNX key2 "new" key3 "world" MGET key1 key2 key3 ``` redis GETEX GETEX ===== ``` GETEX ``` Syntax ``` GETEX key [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | PERSIST] ``` Available since: 6.2.0 Time complexity: O(1) ACL categories: `@write`, `@string`, `@fast`, Get the value of `key` and optionally set its expiration. `GETEX` is similar to [`GET`](../get), but is a write command with additional options. Options ------- The `GETEX` command supports a set of options that modify its behavior: * `EX` *seconds* -- Set the specified expire time, in seconds. * `PX` *milliseconds* -- Set the specified expire time, in milliseconds. * `EXAT` *timestamp-seconds* -- Set the specified Unix time at which the key will expire, in seconds. * `PXAT` *timestamp-milliseconds* -- Set the specified Unix time at which the key will expire, in milliseconds. * [`PERSIST`](../persist) -- Remove the time to live associated with the key. Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): the value of `key`, or `nil` when `key` does not exist. Examples -------- ``` SET mykey "Hello" GETEX mykey TTL mykey GETEX mykey EX 60 TTL mykey ``` redis XADD XADD ==== ``` XADD ``` Syntax ``` XADD key [NOMKSTREAM] [<MAXLEN | MINID> [= | ~] threshold [LIMIT count]] <* | id> field value [field value ...] ``` Available since: 5.0.0 Time complexity: O(1) when adding a new entry, O(N) when trimming where N being the number of entries evicted. ACL categories: `@write`, `@stream`, `@fast`, Appends the specified stream entry to the stream at the specified key. If the key does not exist, as a side effect of running this command the key is created with a stream value. The creation of stream's key can be disabled with the `NOMKSTREAM` option. An entry is composed of a list of field-value pairs. The field-value pairs are stored in the same order they are given by the user. 
Commands that read the stream, such as [`XRANGE`](../xrange) or [`XREAD`](../xread), are guaranteed to return the fields and values exactly in the same order they were added by `XADD`. `XADD` is the *only Redis command* that can add data to a stream, but there are other commands, such as [`XDEL`](../xdel) and [`XTRIM`](../xtrim), that are able to remove data from a stream. Specifying a Stream ID as an argument ------------------------------------- A stream entry ID identifies a given entry inside a stream. The `XADD` command will auto-generate a unique ID for you if the ID argument specified is the `*` character (asterisk ASCII character). However, while useful only in very rare cases, it is possible to specify a well-formed ID, so that the new entry will be added exactly with the specified ID. IDs are specified by two numbers separated by a `-` character: ``` 1526919030474-55 ``` Both quantities are 64-bit numbers. When an ID is auto-generated, the first part is the Unix time in milliseconds of the Redis instance generating the ID. The second part is just a sequence number and is used in order to distinguish IDs generated in the same millisecond. You can also specify an incomplete ID, that consists only of the milliseconds part, which is interpreted as a zero value for sequence part. To have only the sequence part automatically generated, specify the milliseconds part followed by the `-` separator and the `*` character: ``` > XADD mystream 1526919030474-55 message "Hello," "1526919030474-55" > XADD mystream 1526919030474-* message " World!" "1526919030474-56" ``` IDs are guaranteed to be always incremental: If you compare the ID of the entry just inserted it will be greater than any other past ID, so entries are totally ordered inside a stream. In order to guarantee this property, if the current top ID in the stream has a time greater than the current local time of the instance, the top entry time will be used instead, and the sequence part of the ID incremented. 
This may happen when, for instance, the local clock jumps backward, or if after a failover the new master has a different absolute time. When a user specifies an explicit ID to `XADD`, the minimum valid ID is `0-1`, and the user *must* specify an ID which is greater than any other ID currently inside the stream, otherwise the command will fail and return an error. Usually resorting to specific IDs is useful only if you have another system generating unique IDs (for instance an SQL table) and you really want the Redis stream IDs to match those of this other system. Capped streams -------------- `XADD` incorporates the same semantics as the [`XTRIM`](../xtrim) command - refer to its documentation page for more information. This allows adding new entries and keeping the stream's size in check with a single call to `XADD`, effectively capping the stream with an arbitrary threshold. Although exact trimming is possible and is the default, due to the internal representation of streams it is more efficient to add an entry and trim the stream with `XADD` using **almost exact** trimming (the `~` argument). For example, calling `XADD` in the following form: ``` XADD mystream MAXLEN ~ 1000 * ... entry fields here ... ``` will add a new entry but will also evict old entries so that the stream will contain only 1000 entries, or at most a few tens more. Additional information about streams ------------------------------------ For further information about Redis streams please check our [introduction to Redis Streams document](https://redis.io/topics/streams-intro). Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings), specifically: The command returns the ID of the added entry. The ID is the one auto-generated if `*` is passed as ID argument, otherwise the command just returns the same ID specified by the user during insertion. 
The command returns a [Null reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings) when used with the `NOMKSTREAM` option and the key doesn't exist. Examples -------- ``` XADD mystream * name Sara surname OConnor XADD mystream * field1 value1 field2 value2 field3 value3 XLEN mystream XRANGE mystream - + ``` History ------- * Starting with Redis version 6.2.0: Added the `NOMKSTREAM` option, `MINID` trimming strategy and the `LIMIT` option. * Starting with Redis version 7.0.0: Added support for the `<ms>-*` explicit ID form. redis TOPK.QUERY TOPK.QUERY ========== ``` TOPK.QUERY ``` Syntax ``` TOPK.QUERY key item [item ...] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(n) where n is the number of items Checks whether an item is one of Top-K items. Multiple items can be checked at once. ### Parameters * **key**: Name of sketch where item is queried. * **item**: Item/s to be queried. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) of [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) - "1" if item is in Top-K, otherwise "0". Examples -------- ``` redis> TOPK.QUERY topk 42 nonexist 1) (integer) 1 2) (integer) 0 ``` redis INFO INFO ==== ``` INFO ``` Syntax ``` INFO [section [section ...]] ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@slow`, `@dangerous`, The `INFO` command returns information and statistics about the server in a format that is simple to parse by computers and easy to read by humans. 
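INFO's reply is line oriented: `#`-prefixed section headers followed by `field:value` property lines. A tolerant parser, sketched below in Python (`parse_info` is a hypothetical helper, not part of any client library), skips blank, unknown, or malformed lines instead of failing, which keeps it robust across Redis versions:

```python
def parse_info(reply: str) -> dict:
    """Parse an INFO reply into {section: {field: value}} dicts,
    skipping anything that is not a header or a field:value line."""
    sections, current = {}, None
    for line in reply.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("#"):                  # e.g. "# Server"
            current = line[1:].strip().lower()
            sections[current] = {}
        elif ":" in line and current is not None:
            field, _, value = line.partition(":")  # values may contain ':'
            sections[current][field] = value
        # anything else is skipped for forward compatibility

    return sections

reply = ("# Server\r\nredis_version:7.0.0\r\nuptime_in_seconds:42\r\n"
         "# Clients\r\nconnected_clients:1\r\n")
info = parse_info(reply)
print(info["server"]["redis_version"])  # 7.0.0
```

Values are kept as strings here; a real client would convert numeric fields as needed.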
The optional parameter can be used to select a specific section of information: * `server`: General information about the Redis server * `clients`: Client connections section * `memory`: Memory consumption related information * `persistence`: RDB and AOF related information * `stats`: General statistics * `replication`: Master/replica replication information * `cpu`: CPU consumption statistics * `commandstats`: Redis command statistics * `latencystats`: Redis command latency percentile distribution statistics * `sentinel`: Redis Sentinel section (only applicable to Sentinel instances) * `cluster`: Redis Cluster section * `modules`: Module related sections * `keyspace`: Database related statistics * `errorstats`: Redis error statistics It can also take the following values: * `all`: Return all sections (excluding module generated ones) * `default`: Return only the default set of sections * `everything`: Includes `all` and `modules` When no parameter is provided, the `default` option is assumed. Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): as a collection of text lines. Lines can contain a section name (starting with a # character) or a property. All the properties are in the form of `field:value` terminated by `\r\n`. ``` INFO ``` Notes ----- Please note that, depending on the version of Redis, some of the fields have been added or removed. A robust client application should therefore parse the result of this command by skipping unknown properties, and gracefully handle missing fields. Here is the description of fields for Redis >= 2.4. 
Here is the meaning of all fields in the **server** section: * `redis_version`: Version of the Redis server * `redis_git_sha1`: Git SHA1 * `redis_git_dirty`: Git dirty flag * `redis_build_id`: The build id * `redis_mode`: The server's mode ("standalone", "sentinel" or "cluster") * `os`: Operating system hosting the Redis server * `arch_bits`: Architecture (32 or 64 bits) * `multiplexing_api`: Event loop mechanism used by Redis * `atomicvar_api`: Atomicvar API used by Redis * `gcc_version`: Version of the GCC compiler used to compile the Redis server * `process_id`: PID of the server process * `process_supervised`: Supervised system ("upstart", "systemd", "unknown" or "no") * `run_id`: Random value identifying the Redis server (to be used by Sentinel and Cluster) * `tcp_port`: TCP/IP listen port * `server_time_usec`: Epoch-based system time with microsecond precision * `uptime_in_seconds`: Number of seconds since Redis server start * `uptime_in_days`: Same value expressed in days * `hz`: The server's current frequency setting * `configured_hz`: The server's configured frequency setting * `lru_clock`: Clock incrementing every minute, for LRU management * `executable`: The path to the server's executable * `config_file`: The path to the config file * `io_threads_active`: Flag indicating if I/O threads are active * `shutdown_in_milliseconds`: The maximum time remaining for replicas to catch up the replication before completing the shutdown sequence. This field is only present during shutdown. Here is the meaning of all fields in the **clients** section: * `connected_clients`: Number of client connections (excluding connections from replicas) * `cluster_connections`: An approximation of the number of sockets used by the cluster's bus * `maxclients`: The value of the `maxclients` configuration directive. This is the upper limit for the sum of `connected_clients`, `connected_slaves` and `cluster_connections`. 
* `client_recent_max_input_buffer`: Biggest input buffer among current client connections * `client_recent_max_output_buffer`: Biggest output buffer among current client connections * `blocked_clients`: Number of clients pending on a blocking call ([`BLPOP`](../blpop), [`BRPOP`](../brpop), [`BRPOPLPUSH`](../brpoplpush), [`BLMOVE`](../blmove), [`BZPOPMIN`](../bzpopmin), [`BZPOPMAX`](../bzpopmax)) * `tracking_clients`: Number of clients being tracked ([`CLIENT TRACKING`](../client-tracking)) * `clients_in_timeout_table`: Number of clients in the clients timeout table Here is the meaning of all fields in the **memory** section: * `used_memory`: Total number of bytes allocated by Redis using its allocator (either standard **libc**, **jemalloc**, or an alternative allocator such as [**tcmalloc**](http://code.google.com/p/google-perftools/)) * `used_memory_human`: Human readable representation of previous value * `used_memory_rss`: Number of bytes that Redis allocated as seen by the operating system (a.k.a resident set size). 
This is the number reported by tools such as `top(1)` and `ps(1)` * `used_memory_rss_human`: Human readable representation of previous value * `used_memory_peak`: Peak memory consumed by Redis (in bytes) * `used_memory_peak_human`: Human readable representation of previous value * `used_memory_peak_perc`: The percentage of `used_memory_peak` out of `used_memory` * `used_memory_overhead`: The sum in bytes of all overheads that the server allocated for managing its internal data structures * `used_memory_startup`: Initial amount of memory consumed by Redis at startup in bytes * `used_memory_dataset`: The size in bytes of the dataset (`used_memory_overhead` subtracted from `used_memory`) * `used_memory_dataset_perc`: The percentage of `used_memory_dataset` out of the net memory usage (`used_memory` minus `used_memory_startup`) * `total_system_memory`: The total amount of memory that the Redis host has * `total_system_memory_human`: Human readable representation of previous value * `used_memory_lua`: Number of bytes used by the Lua engine * `used_memory_lua_human`: Human readable representation of previous value * `used_memory_scripts`: Number of bytes used by cached Lua scripts * `used_memory_scripts_human`: Human readable representation of previous value * `maxmemory`: The value of the `maxmemory` configuration directive * `maxmemory_human`: Human readable representation of previous value * `maxmemory_policy`: The value of the `maxmemory-policy` configuration directive * `mem_fragmentation_ratio`: Ratio between `used_memory_rss` and `used_memory`. Note that this doesn't only include fragmentation, but also other process overheads (see the `allocator_*` metrics), and also overheads like code, shared libraries, stack, etc. * `mem_fragmentation_bytes`: Delta between `used_memory_rss` and `used_memory`. Note that when the total fragmentation bytes is low (a few megabytes), a high ratio (e.g. 1.5 and above) is not an indication of an issue. 
* `allocator_frag_ratio`: Ratio between `allocator_active` and `allocator_allocated`. This is the true (external) fragmentation metric (not `mem_fragmentation_ratio`). * `allocator_frag_bytes`: Delta between `allocator_active` and `allocator_allocated`. See note about `mem_fragmentation_bytes`. * `allocator_rss_ratio`: Ratio between `allocator_resident` and `allocator_active`. This usually indicates pages that the allocator can and probably will soon release back to the OS. * `allocator_rss_bytes`: Delta between `allocator_resident` and `allocator_active` * `rss_overhead_ratio`: Ratio between `used_memory_rss` (the process RSS) and `allocator_resident`. This includes RSS overheads that are not allocator or heap related. * `rss_overhead_bytes`: Delta between `used_memory_rss` (the process RSS) and `allocator_resident` * `allocator_allocated`: Total bytes allocated from the allocator, including internal fragmentation. Normally the same as `used_memory`. * `allocator_active`: Total bytes in the allocator active pages; this includes external fragmentation. * `allocator_resident`: Total bytes resident (RSS) in the allocator; this includes pages that can be released to the OS (by [`MEMORY PURGE`](../memory-purge), or just waiting). * `mem_not_counted_for_evict`: Used memory that's not counted for key eviction. This is basically transient replica and AOF buffers. * `mem_clients_slaves`: Memory used by replica clients - Starting with Redis 7.0, replica buffers share memory with the replication backlog, so this field can show 0 when replicas don't trigger an increase of memory usage. * `mem_clients_normal`: Memory used by normal clients * `mem_cluster_links`: Memory used by links to peers on the cluster bus when cluster mode is enabled. * `mem_aof_buffer`: Transient memory used for AOF and AOF rewrite buffers * `mem_replication_backlog`: Memory used by replication backlog * `mem_total_replication_buffers`: Total memory consumed for replication buffers - Added in Redis 7.0. 
* `mem_allocator`: Memory allocator, chosen at compile time. * `active_defrag_running`: When `activedefrag` is enabled, this indicates whether defragmentation is currently active, and the CPU percentage it intends to utilize. * `lazyfree_pending_objects`: The number of objects waiting to be freed (as a result of calling [`UNLINK`](../unlink), or [`FLUSHDB`](../flushdb) and [`FLUSHALL`](../flushall) with the **ASYNC** option) * `lazyfreed_objects`: The number of objects that have been lazy freed. Ideally, the `used_memory_rss` value should be only slightly higher than `used_memory`. When rss >> used, a large difference may mean there is (external) memory fragmentation, which can be evaluated by checking `allocator_frag_ratio`, `allocator_frag_bytes`. When used >> rss, it means part of Redis memory has been swapped off by the operating system: expect some significant latencies. Because Redis does not have control over how its allocations are mapped to memory pages, high `used_memory_rss` is often the result of a spike in memory usage. When Redis frees memory, the memory is given back to the allocator, and the allocator may or may not give the memory back to the system. There may be a discrepancy between the `used_memory` value and memory consumption as reported by the operating system. It may be due to the fact memory has been used and released by Redis, but not given back to the system. The `used_memory_peak` value is generally useful to check this point. Additional introspective information about the server's memory can be obtained by referring to the [`MEMORY STATS`](../memory-stats) command and the [`MEMORY DOCTOR`](../memory-doctor). Here is the meaning of all fields in the **persistence** section: * `loading`: Flag indicating if the load of a dump file is on-going * `async_loading`: Currently loading replication data-set asynchronously while serving old data. This means `repl-diskless-load` is enabled and set to `swapdb`. Added in Redis 7.0. 
* `current_cow_peak`: The peak size in bytes of copy-on-write memory while a child fork is running * `current_cow_size`: The size in bytes of copy-on-write memory while a child fork is running * `current_cow_size_age`: The age, in seconds, of the `current_cow_size` value. * `current_fork_perc`: The percentage of progress of the current fork process. For AOF and RDB forks it is the percentage of `current_save_keys_processed` out of `current_save_keys_total`. * `current_save_keys_processed`: Number of keys processed by the current save operation * `current_save_keys_total`: Number of keys at the beginning of the current save operation * `rdb_changes_since_last_save`: Number of changes since the last dump * `rdb_bgsave_in_progress`: Flag indicating an RDB save is on-going * `rdb_last_save_time`: Epoch-based timestamp of last successful RDB save * `rdb_last_bgsave_status`: Status of the last RDB save operation * `rdb_last_bgsave_time_sec`: Duration of the last RDB save operation in seconds * `rdb_current_bgsave_time_sec`: Duration of the on-going RDB save operation if any * `rdb_last_cow_size`: The size in bytes of copy-on-write memory during the last RDB save operation * `rdb_last_load_keys_expired`: Number of volatile keys deleted during the last RDB loading. Added in Redis 7.0. * `rdb_last_load_keys_loaded`: Number of keys loaded during the last RDB loading. Added in Redis 7.0. * `aof_enabled`: Flag indicating AOF logging is activated * `aof_rewrite_in_progress`: Flag indicating an AOF rewrite operation is on-going * `aof_rewrite_scheduled`: Flag indicating an AOF rewrite operation will be scheduled once the on-going RDB save is complete. 
* `aof_last_rewrite_time_sec`: Duration of the last AOF rewrite operation in seconds * `aof_current_rewrite_time_sec`: Duration of the on-going AOF rewrite operation if any * `aof_last_bgrewrite_status`: Status of the last AOF rewrite operation * `aof_last_write_status`: Status of the last write operation to the AOF * `aof_last_cow_size`: The size in bytes of copy-on-write memory during the last AOF rewrite operation * `module_fork_in_progress`: Flag indicating a module fork is on-going * `module_fork_last_cow_size`: The size in bytes of copy-on-write memory during the last module fork operation * `aof_rewrites`: Number of AOF rewrites performed since startup * `rdb_saves`: Number of RDB snapshots performed since startup `rdb_changes_since_last_save` refers to the number of operations that produced some kind of changes in the dataset since the last time either [`SAVE`](../save) or [`BGSAVE`](../bgsave) was called. If AOF is activated, these additional fields will be added: * `aof_current_size`: AOF current file size * `aof_base_size`: AOF file size on latest startup or rewrite * `aof_pending_rewrite`: Flag indicating an AOF rewrite operation will be scheduled once the on-going RDB save is complete. * `aof_buffer_length`: Size of the AOF buffer * `aof_rewrite_buffer_length`: Size of the AOF rewrite buffer. 
Note this field was removed in Redis 7.0 * `aof_pending_bio_fsync`: Number of fsync pending jobs in background I/O queue * `aof_delayed_fsync`: Delayed fsync counter If a load operation is on-going, these additional fields will be added: * `loading_start_time`: Epoch-based timestamp of the start of the load operation * `loading_total_bytes`: Total file size * `loading_rdb_used_mem`: The memory usage of the server that had generated the RDB file at the time of the file's creation * `loading_loaded_bytes`: Number of bytes already loaded * `loading_loaded_perc`: Same value expressed as a percentage * `loading_eta_seconds`: ETA in seconds for the load to be complete Here is the meaning of all fields in the **stats** section: * `total_connections_received`: Total number of connections accepted by the server * `total_commands_processed`: Total number of commands processed by the server * `instantaneous_ops_per_sec`: Number of commands processed per second * `total_net_input_bytes`: The total number of bytes read from the network * `total_net_output_bytes`: The total number of bytes written to the network * `total_net_repl_input_bytes`: The total number of bytes read from the network for replication purposes * `total_net_repl_output_bytes`: The total number of bytes written to the network for replication purposes * `instantaneous_input_kbps`: The network's read rate per second in KB/sec * `instantaneous_output_kbps`: The network's write rate per second in KB/sec * `instantaneous_input_repl_kbps`: The network's read rate per second in KB/sec for replication purposes * `instantaneous_output_repl_kbps`: The network's write rate per second in KB/sec for replication purposes * `rejected_connections`: Number of connections rejected because of `maxclients` limit * `sync_full`: The number of full resyncs with replicas * `sync_partial_ok`: The number of accepted partial resync requests * `sync_partial_err`: The number of denied partial resync requests * `expired_keys`: Total 
number of key expiration events * `expired_stale_perc`: The percentage of keys probably expired * `expired_time_cap_reached_count`: The count of times that active expiry cycles have stopped early * `expire_cycle_cpu_milliseconds`: The cumulative amount of time spent on active expiry cycles * `evicted_keys`: Number of evicted keys due to `maxmemory` limit * `evicted_clients`: Number of evicted clients due to `maxmemory-clients` limit. Added in Redis 7.0. * `total_eviction_exceeded_time`: Total time `used_memory` was greater than `maxmemory` since server startup, in milliseconds * `current_eviction_exceeded_time`: The time passed since `used_memory` last rose above `maxmemory`, in milliseconds * `keyspace_hits`: Number of successful lookups of keys in the main dictionary * `keyspace_misses`: Number of failed lookups of keys in the main dictionary * `pubsub_channels`: Global number of pub/sub channels with client subscriptions * `pubsub_patterns`: Global number of pub/sub patterns with client subscriptions * `pubsubshard_channels`: Global number of pub/sub shard channels with client subscriptions. 
Added in Redis 7.0.3 * `latest_fork_usec`: Duration of the latest fork operation in microseconds * `total_forks`: Total number of fork operations since the server start * `migrate_cached_sockets`: The number of sockets open for [`MIGRATE`](../migrate) purposes * `slave_expires_tracked_keys`: The number of keys tracked for expiry purposes (applicable only to writable replicas) * `active_defrag_hits`: Number of value reallocations performed by the active defragmentation process * `active_defrag_misses`: Number of aborted value reallocations started by the active defragmentation process * `active_defrag_key_hits`: Number of keys that were actively defragmented * `active_defrag_key_misses`: Number of keys that were skipped by the active defragmentation process * `total_active_defrag_time`: Total time memory fragmentation was over the limit, in milliseconds * `current_active_defrag_time`: The time passed since memory fragmentation last was over the limit, in milliseconds * `tracking_total_keys`: Number of keys being tracked by the server * `tracking_total_items`: Number of items, that is the sum of the number of clients for each key, that are being tracked * `tracking_total_prefixes`: Number of tracked prefixes in server's prefix table (only applicable for broadcast mode) * `unexpected_error_replies`: Number of unexpected error replies, that are types of errors from an AOF load or replication * `total_error_replies`: Total number of issued error replies, that is the sum of rejected commands (errors prior to command execution) and failed commands (errors within the command execution) * `dump_payload_sanitizations`: Total number of dump payload deep integrity validations (see `sanitize-dump-payload` config). 
* `total_reads_processed`: Total number of read events processed * `total_writes_processed`: Total number of write events processed * `io_threaded_reads_processed`: Number of read events processed by the main and I/O threads * `io_threaded_writes_processed`: Number of write events processed by the main and I/O threads * `acl_access_denied_auth`: Number of authentication failures * `acl_access_denied_cmd`: Number of commands rejected because of access denied to the command * `acl_access_denied_key`: Number of commands rejected because of access denied to a key * `acl_access_denied_channel`: Number of commands rejected because of access denied to a channel Here is the meaning of all fields in the **replication** section: * `role`: Value is "master" if the instance is a replica of no one, or "slave" if the instance is a replica of some master instance. Note that a replica can be master of another replica (chained replication). * `master_failover_state`: The state of an ongoing failover, if any. * `master_replid`: The replication ID of the Redis server. * `master_replid2`: The secondary replication ID, used for PSYNC after a failover. 
* `master_repl_offset`: The server's current replication offset * `second_repl_offset`: The offset up to which replication IDs are accepted * `repl_backlog_active`: Flag indicating replication backlog is active * `repl_backlog_size`: Total size in bytes of the replication backlog buffer * `repl_backlog_first_byte_offset`: The master offset of the replication backlog buffer * `repl_backlog_histlen`: Size in bytes of the data in the replication backlog buffer If the instance is a replica, these additional fields are provided: * `master_host`: Host or IP address of the master * `master_port`: Master listening TCP port * `master_link_status`: Status of the link (up/down) * `master_last_io_seconds_ago`: Number of seconds since the last interaction with master * `master_sync_in_progress`: Indicates that the master is syncing to the replica * `slave_read_repl_offset`: The read replication offset of the replica instance. * `slave_repl_offset`: The replication offset of the replica instance * `slave_priority`: The priority of the instance as a candidate for failover * `slave_read_only`: Flag indicating if the replica is read-only * `replica_announced`: Flag indicating if the replica is announced by Sentinel. If a SYNC operation is on-going, these additional fields are provided: * `master_sync_total_bytes`: Total number of bytes that need to be transferred. 
This may be 0 when the size is unknown (for example, when the `repl-diskless-sync` configuration directive is used) * `master_sync_read_bytes`: Number of bytes already transferred * `master_sync_left_bytes`: Number of bytes left before syncing is complete (may be negative when `master_sync_total_bytes` is 0) * `master_sync_perc`: The percentage of `master_sync_read_bytes` out of `master_sync_total_bytes`, or an approximation that uses `loading_rdb_used_mem` when `master_sync_total_bytes` is 0 * `master_sync_last_io_seconds_ago`: Number of seconds since last transfer I/O during a SYNC operation If the link between master and replica is down, an additional field is provided: * `master_link_down_since_seconds`: Number of seconds since the link is down The following field is always provided: * `connected_slaves`: Number of connected replicas If the server is configured with the `min-slaves-to-write` (or starting with Redis 5 with the `min-replicas-to-write`) directive, an additional field is provided: * `min_slaves_good_slaves`: Number of replicas currently considered good For each replica, the following line is added: * `slaveXXX`: id, IP address, port, state, offset, lag Here is the meaning of all fields in the **cpu** section: * `used_cpu_sys`: System CPU consumed by the Redis server, which is the sum of system CPU consumed by all threads of the server process (main thread and background threads) * `used_cpu_user`: User CPU consumed by the Redis server, which is the sum of user CPU consumed by all threads of the server process (main thread and background threads) * `used_cpu_sys_children`: System CPU consumed by the background processes * `used_cpu_user_children`: User CPU consumed by the background processes * `used_cpu_sys_main_thread`: System CPU consumed by the Redis server main thread * `used_cpu_user_main_thread`: User CPU consumed by the Redis server main thread The **commandstats** section provides statistics based on the command type, including the number of 
calls that reached command execution (not rejected), the total CPU time consumed by these commands, the average CPU consumed per command execution, the number of rejected calls (errors prior to command execution), and the number of failed calls (errors within the command execution). For each command type, the following line is added: * `cmdstat_XXX`: `calls=XXX,usec=XXX,usec_per_call=XXX,rejected_calls=XXX,failed_calls=XXX` The **latencystats** section provides latency percentile distribution statistics based on the command type. By default, the exported latency percentiles are the p50, p99, and p999. If you need to change the exported percentiles, use `CONFIG SET latency-tracking-info-percentiles "50.0 99.0 99.9"`. This section requires the extended latency monitoring feature to be enabled (by default it's enabled). If you need to enable it, use `CONFIG SET latency-tracking yes`. For each command type, the following line is added: * `latency_percentiles_usec_XXX: p<percentile 1>=<percentile 1 value>,p<percentile 2>=<percentile 2 value>,...` The **errorstats** section enables keeping track of the different errors that occurred within Redis, based upon the reply error prefix (the first word after the "-", up to the first space; for example, `ERR`). For each error type, the following line is added: * `errorstat_XXX`: `count=XXX` The **sentinel** section is only available in Redis Sentinel instances. It consists of the following fields: * `sentinel_masters`: Number of Redis masters monitored by this Sentinel instance * `sentinel_tilt`: A value of 1 means this sentinel is in TILT mode * `sentinel_tilt_since_seconds`: Duration in seconds of current TILT, or -1 if not TILTed. 
Added in Redis 7.0.0 * `sentinel_running_scripts`: The number of scripts this Sentinel is currently executing * `sentinel_scripts_queue_length`: The length of the queue of user scripts that are pending execution * `sentinel_simulate_failure_flags`: Flags for the `SENTINEL SIMULATE-FAILURE` command The **cluster** section currently only contains a unique field: * `cluster_enabled`: Indicates that Redis cluster is enabled The **modules** section contains additional information about loaded modules if the modules provide it. The field part of properties lines in this section is always prefixed with the module's name. The **keyspace** section provides statistics on the main dictionary of each database. The statistics are the number of keys, and the number of keys with an expiration. For each database, the following line is added: * `dbXXX`: `keys=XXX,expires=XXX` **A note about the word slave used in this man page**: Starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API is naturally deprecated. **Modules generated sections**: Starting with Redis 6, modules can inject their info into the `INFO` command; these are excluded by default even when the `all` argument is provided (it will include a list of loaded modules but not their generated info fields). To get these you must use either the `modules` argument or `everything`. History ------- * Starting with Redis version 7.0.0: Added support for taking multiple section arguments.
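Since every section of the reply is a sequence of `#`-prefixed section headers and `field:value` lines, it can be parsed generically on the client side. Below is a minimal Python sketch; the `parse_info` helper and the sample `raw` string are illustrative (made-up values), not part of any client library:

```python
def parse_info(raw: str) -> dict:
    """Parse an INFO bulk-string reply into {section: {field: value}}."""
    sections: dict = {}
    current = None
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("#"):
            # Section header, e.g. "# Stats"
            current = line[1:].strip().lower()
            sections[current] = {}
        elif ":" in line and current is not None:
            field, _, value = line.partition(":")
            sections[current][field] = value
    return sections

# Illustrative sample reply (values are made up):
raw = "# Stats\r\nkeyspace_hits:100\r\nkeyspace_misses:25\r\n"
info = parse_info(raw)
hits = int(info["stats"]["keyspace_hits"])
misses = int(info["stats"]["keyspace_misses"])
hit_ratio = hits / (hits + misses)  # 0.8 for this sample
```

Note that some field values are themselves `=`-separated pairs (for example the `cmdstat_*` and `errorstat_*` lines), which would need a second parsing pass.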
redis HKEYS HKEYS ===== ``` HKEYS ``` Syntax ``` HKEYS key ``` Available since: 2.0.0 Time complexity: O(N) where N is the size of the hash. ACL categories: `@read`, `@hash`, `@slow`, Returns all field names in the hash stored at `key`. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list of fields in the hash, or an empty list when `key` does not exist. Examples -------- ``` HSET myhash field1 "Hello" HSET myhash field2 "World" HKEYS myhash ``` redis LPOP LPOP ==== ``` LPOP ``` Syntax ``` LPOP key [count] ``` Available since: 1.0.0 Time complexity: O(N) where N is the number of elements returned ACL categories: `@write`, `@list`, `@fast`, Removes and returns the first elements of the list stored at `key`. By default, the command pops a single element from the beginning of the list. When provided with the optional `count` argument, the reply will consist of up to `count` elements, depending on the list's length. Return ------ When called without the `count` argument: [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): the value of the first element, or `nil` when `key` does not exist. When called with the `count` argument: [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list of popped elements, or `nil` when `key` does not exist. Examples -------- ``` RPUSH mylist "one" "two" "three" "four" "five" LPOP mylist LPOP mylist 2 LRANGE mylist 0 -1 ``` History ------- * Starting with Redis version 6.2.0: Added the `count` argument. redis TOPK.ADD TOPK.ADD ======== ``` TOPK.ADD ``` Syntax ``` TOPK.ADD key items [items ...] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(n \* k) where n is the number of items and k is the depth Adds an item to the data structure. Multiple items can be added at once. If an item enters the Top-K list, the item which is expelled is returned. 
This allows dynamic heavy-hitter detection of items being entered or expelled from Top-K list. ### Parameters * **key**: Name of sketch where item is added. * **item**: Item(s) to be added. ### Return [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) of [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) - if an element was dropped from the TopK list, [Null reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings) otherwise. #### Example ``` redis> TOPK.ADD topk foo bar 42 1) (nil) 2) baz 3) (nil) ``` redis EXPIRETIME EXPIRETIME ========== ``` EXPIRETIME ``` Syntax ``` EXPIRETIME key ``` Available since: 7.0.0 Time complexity: O(1) ACL categories: `@keyspace`, `@read`, `@fast`, Returns the absolute Unix timestamp (since January 1, 1970) in seconds at which the given key will expire. See also the [`PEXPIRETIME`](../pexpiretime) command which returns the same information with milliseconds resolution. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): Expiration Unix timestamp in seconds, or a negative value in order to signal an error (see the description below). * The command returns `-1` if the key exists but has no associated expiration time. * The command returns `-2` if the key does not exist. Examples -------- ``` SET mykey "Hello" EXPIREAT mykey 33177117420 EXPIRETIME mykey ``` redis BF.INFO BF.INFO ======= ``` BF.INFO ``` Syntax ``` BF.INFO key [CAPACITY | SIZE | FILTERS | ITEMS | EXPANSION] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 1.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(1) Return information about a Bloom filter. ### Parameters * **key**: key name for an existing Bloom filter. 
Optional parameters: * `CAPACITY` Number of unique items that can be stored in this Bloom filter before scaling would be required (including already added items) * `SIZE` Memory size: number of bytes allocated for this Bloom filter * `FILTERS` Number of sub-filters * `ITEMS` Number of items that were added to this Bloom filter and detected as unique (items that caused at least one bit to be set in at least one sub-filter) * `EXPANSION` Expansion rate When no optional parameter is specified: return all information fields. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) with information about the Bloom filter. Error when `key` does not exist. Error when `key` is of a type other than Bloom filter. Examples --------

```
redis> BF.ADD bf1 observation1
(integer) 1
redis> BF.INFO bf1
 1) Capacity
 2) (integer) 100
 3) Size
 4) (integer) 240
 5) Number of filters
 6) (integer) 1
 7) Number of items inserted
 8) (integer) 1
 9) Expansion rate
10) (integer) 2
redis> BF.INFO bf1 CAPACITY
1) (integer) 100
```

redis CONFIG CONFIG ====== ``` CONFIG GET ``` Syntax ``` CONFIG GET parameter [parameter ...] ``` Available since: 2.0.0 Time complexity: O(N) when N is the number of configuration parameters provided ACL categories: `@admin`, `@slow`, `@dangerous`, The `CONFIG GET` command is used to read the configuration parameters of a running Redis server. Not all the configuration parameters are supported in Redis 2.4, while Redis 2.6 can read the whole configuration of a server using this command. The symmetric command used to alter the configuration at run time is `CONFIG SET`. `CONFIG GET` takes multiple arguments, which are glob-style patterns. Any configuration parameter matching any of the patterns are reported as a list of key-value pairs. 
Example: ``` redis> config get *max-*-entries* maxmemory 1) "maxmemory" 2) "0" 3) "hash-max-listpack-entries" 4) "512" 5) "hash-max-ziplist-entries" 6) "512" 7) "set-max-intset-entries" 8) "512" 9) "zset-max-listpack-entries" 10) "128" 11) "zset-max-ziplist-entries" 12) "128" ``` You can obtain a list of all the supported configuration parameters by typing `CONFIG GET *` in an open `redis-cli` prompt. All the supported parameters have the same meaning as the equivalent configuration parameter used in the [redis.conf](http://github.com/redis/redis/raw/unstable/redis.conf) file: Note that you should look at the redis.conf file relevant to the version you're working with as configuration options might change between versions. The link above is to the latest development version. Return ------ The return type of the command is an [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays). History ------- * Starting with Redis version 7.0.0: Added the ability to pass multiple pattern parameters in one call redis SUBSCRIBE SUBSCRIBE ========= ``` SUBSCRIBE ``` Syntax ``` SUBSCRIBE channel [channel ...] ``` Available since: 2.0.0 Time complexity: O(N) where N is the number of channels to subscribe to. ACL categories: `@pubsub`, `@slow`, Subscribes the client to the specified channels. Once the client enters the subscribed state it is not supposed to issue any other commands, except for additional `SUBSCRIBE`, [`SSUBSCRIBE`](../ssubscribe), [`PSUBSCRIBE`](../psubscribe), [`UNSUBSCRIBE`](../unsubscribe), [`SUNSUBSCRIBE`](../sunsubscribe), [`PUNSUBSCRIBE`](../punsubscribe), [`PING`](../ping), [`RESET`](../reset) and [`QUIT`](../quit) commands. However, if RESP3 is used (see [`HELLO`](../hello)) it is possible for a client to issue any commands while in subscribed state. For more information, see [Pub/sub](https://redis.io/docs/manual/pubsub/). Return ------ When successful, this command doesn't return anything. 
Instead, for each channel, one message with the first element being the string "subscribe" is pushed as a confirmation that the command succeeded. Behavior change history ----------------------- * `>= 6.2.0`: [`RESET`](../reset) can be called to exit subscribed state. redis FLUSHALL FLUSHALL ======== ``` FLUSHALL ``` Syntax ``` FLUSHALL [ASYNC | SYNC] ``` Available since: 1.0.0 Time complexity: O(N) where N is the total number of keys in all databases ACL categories: `@keyspace`, `@write`, `@slow`, `@dangerous`, Delete all the keys of all the existing databases, not just the currently selected one. This command never fails. By default, `FLUSHALL` will synchronously flush all the databases. Starting with Redis 6.2, setting the **lazyfree-lazy-user-flush** configuration directive to "yes" changes the default flush mode to asynchronous. It is possible to use one of the following modifiers to dictate the flushing mode explicitly: * `ASYNC`: flushes the databases asynchronously * `SYNC`: flushes the databases synchronously Note: an asynchronous `FLUSHALL` command only deletes keys that were present at the time the command was invoked. Keys created during an asynchronous flush will be unaffected. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) Behavior change history ----------------------- * `>= 6.2.0`: Default flush behavior now configurable by the **lazyfree-lazy-user-flush** configuration directive. History ------- * Starting with Redis version 4.0.0: Added the `ASYNC` flushing mode modifier. * Starting with Redis version 6.2.0: Added the `SYNC` flushing mode modifier. redis TDIGEST.BYRANK TDIGEST.BYRANK ============== ``` TDIGEST.BYRANK ``` Syntax ``` TDIGEST.BYRANK key rank [rank ...] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.4.0](https://redis.io/docs/stack/bloom) Time complexity: O(1) Returns, for each input rank, an estimation of the value (floating-point) with that rank. 
Multiple estimations can be retrieved in a single call. Required arguments ------------------ `key` is key name for an existing t-digest sketch. `rank` Rank, for which the value should be retrieved. 0 is the rank of the value of the smallest observation. *n*-1 is the rank of the value of the largest observation; *n* denotes the number of observations added to the sketch. Return value ------------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) - an array of floating-points populated with value\_1, value\_2, ..., value\_R: * Return an accurate result when `rank` is 0 (the value of the smallest observation) * Return an accurate result when `rank` is *n*-1 (the value of the largest observation), where *n* denotes the number of observations added to the sketch. * Return 'inf' when `rank` is equal to *n* or larger than *n* All values are 'nan' if the sketch is empty. Examples -------- ``` redis> TDIGEST.CREATE t COMPRESSION 1000 OK redis> TDIGEST.ADD t 1 2 2 3 3 3 4 4 4 4 5 5 5 5 5 OK redis> TDIGEST.BYRANK t 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 1) "1" 2) "2" 3) "2" 4) "3" 5) "3" 6) "3" 7) "4" 8) "4" 9) "4" 10) "4" 11) "5" 12) "5" 13) "5" 14) "5" 15) "5" 16) "inf" ``` redis BRPOPLPUSH BRPOPLPUSH ========== ``` BRPOPLPUSH (deprecated) ``` As of Redis version 6.2.0, this command is regarded as deprecated. It can be replaced by [`BLMOVE`](../blmove) with the `RIGHT` and `LEFT` arguments when migrating or writing new code. Syntax ``` BRPOPLPUSH source destination timeout ``` Available since: 2.2.0 Time complexity: O(1) ACL categories: `@write`, `@list`, `@slow`, `@blocking`, `BRPOPLPUSH` is the blocking variant of [`RPOPLPUSH`](../rpoplpush). When `source` contains elements, this command behaves exactly like [`RPOPLPUSH`](../rpoplpush). When used inside a [`MULTI`](../multi)/[`EXEC`](../exec) block, this command behaves exactly like [`RPOPLPUSH`](../rpoplpush). 
When `source` is empty, Redis will block the connection until another client pushes to it or until `timeout` is reached. A `timeout` of zero can be used to block indefinitely. See [`RPOPLPUSH`](../rpoplpush) for more information. Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): the element being popped from `source` and pushed to `destination`. If `timeout` is reached, a [Null reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings) is returned. Pattern: Reliable queue ----------------------- Please see the pattern description in the [`RPOPLPUSH`](../rpoplpush) documentation. Pattern: Circular list ---------------------- Please see the pattern description in the [`RPOPLPUSH`](../rpoplpush) documentation. History ------- * Starting with Redis version 6.0.0: `timeout` is interpreted as a double instead of an integer. redis GRAPH.LIST GRAPH.LIST ========== ``` GRAPH.LIST ``` Syntax ``` GRAPH.LIST ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Graph 2.4.3](https://redis.io/docs/stack/graph) Time complexity: Lists all graph keys in the keyspace. ``` 127.0.0.1:6379> GRAPH.LIST 2) G 3) resources 4) players ``` redis TS.CREATERULE TS.CREATERULE ============= ``` TS.CREATERULE ``` Syntax ``` TS.CREATERULE sourceKey destKey AGGREGATION aggregator bucketDuration [alignTimestamp] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [TimeSeries 1.0.0](https://redis.io/docs/stack/timeseries) Time complexity: O(1) Create a compaction rule [Examples](#examples) Required arguments ------------------ `sourceKey` is key name for the source time series. `destKey` is key name for destination (compacted) time series. It must be created before `TS.CREATERULE` is called. `AGGREGATION aggregator bucketDuration` aggregates results into time buckets. 
* `aggregator` takes one of the following aggregation types:

| `aggregator` | Description |
| --- | --- |
| `avg` | Arithmetic mean of all values |
| `sum` | Sum of all values |
| `min` | Minimum value |
| `max` | Maximum value |
| `range` | Difference between the highest and the lowest value |
| `count` | Number of values |
| `first` | Value with lowest timestamp in the bucket |
| `last` | Value with highest timestamp in the bucket |
| `std.p` | Population standard deviation of the values |
| `std.s` | Sample standard deviation of the values |
| `var.p` | Population variance of the values |
| `var.s` | Sample variance of the values |
| `twa` | Time-weighted average over the bucket's timeframe (since RedisTimeSeries v1.8) |

* `bucketDuration` is duration of each bucket, in milliseconds. **Notes** * Only new samples that are added into the source series after the creation of the rule will be aggregated. * Calling `TS.CREATERULE` with a nonempty `destKey` may result in inconsistencies between the raw and the compacted data. * Explicitly adding samples to a compacted time series (using [`TS.ADD`](../ts.add), [`TS.MADD`](../ts.madd), [`TS.INCRBY`](../ts.incrby), or [`TS.DECRBY`](../ts.decrby)) may result in inconsistencies between the raw and the compacted data. The compaction process may override such samples. * If no samples are added to the source time series during a bucket period, no *compacted sample* is added to the destination time series. * The timestamp of a compacted sample added to the destination time series is set to the start timestamp of the appropriate compaction bucket. For example, for a 10-minute compaction bucket with no alignment, the compacted samples timestamps are `x:00`, `x:10`, `x:20`, and so on. * Deleting `destKey` will cause the compaction rule to be deleted as well. * On a clustered environment, hash tags should be used to force `sourceKey` and `destKey` to be stored in the same hash slot. 
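The note above about compacted-sample timestamps can be expressed numerically: a sample's bucket starts at its timestamp rounded down to the nearest multiple of the bucket duration, shifted by the alignment timestamp (0 by default). A small Python sketch of that arithmetic; the `bucket_start` helper is illustrative, not part of the RedisTimeSeries API:

```python
def bucket_start(ts_ms: int, bucket_duration_ms: int, align_ts_ms: int = 0) -> int:
    """Start timestamp (ms) of the compaction bucket containing ts_ms."""
    return ts_ms - ((ts_ms - align_ts_ms) % bucket_duration_ms)

TEN_MINUTES = 10 * 60 * 1000
# With no alignment, a sample at ~12 minutes past the epoch falls into
# the 10-minute bucket that starts at the 10-minute mark.
assert bucket_start(12 * 60 * 1000 + 345, TEN_MINUTES) == 10 * 60 * 1000
```

With a 24-hour bucket duration and a 6-hour alignment timestamp, every bucket starts at 06:00, which matches the 06:00-to-06:00 timeframe described later in this section.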
Optional arguments ------------------ `alignTimestamp` (since RedisTimeSeries v1.8) ensures that there is a bucket that starts exactly at `alignTimestamp` and aligns all other buckets accordingly. It is expressed in milliseconds. The default value is 0 aligned with the epoch. For example, if `bucketDuration` is 24 hours (`24 * 3600 * 1000`), setting `alignTimestamp` to 6 hours after the epoch (`6 * 3600 * 1000`) ensures that each bucket’s timeframe is `[06:00 .. 06:00)`. Return value ------------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) - `OK` if executed correctly, or [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors) otherwise. Examples -------- **Create a compaction rule** Create a time series to store the temperatures measured in Tel Aviv. ``` 127.0.0.1:6379> TS.CREATE temp:TLV LABELS type temp location TLV OK ``` Next, create a compacted time series named *dailyAvgTemp* containing one compacted sample per 24 hours: the time-weighted average of all measurements taken from midnight to next midnight. ``` 127.0.0.1:6379> TS.CREATE dailyAvgTemp:TLV LABELS type temp location TLV 127.0.0.1:6379> TS.CREATERULE temp:TLV dailyAvgTemp:TLV AGGREGATION twa 86400000 ``` Now, also create a compacted time series named *dailyDiffTemp*. This time series will contain one compacted sample per 24 hours: the difference between the minimum and the maximum temperature measured between 06:00 and 06:00 next day. Here, 86400000 is the number of milliseconds in 24 hours, 21600000 is the number of milliseconds in 6 hours. 
``` 127.0.0.1:6379> TS.CREATE dailyDiffTemp:TLV LABELS type temp location TLV 127.0.0.1:6379> TS.CREATERULE temp:TLV dailyDiffTemp:TLV AGGREGATION range 86400000 21600000 ``` See also -------- [`TS.DELETERULE`](../ts.deleterule) Related topics -------------- [RedisTimeSeries](https://redis.io/docs/stack/timeseries) redis CMS.INCRBY CMS.INCRBY ========== ``` CMS.INCRBY ``` Syntax ``` CMS.INCRBY key item increment [item increment ...] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(n) where n is the number of items Increases the count of item by increment. Multiple items can be increased with one call. ### Parameters: * **key**: The name of the sketch. * **item**: The item whose counter is to be increased. * **increment**: Amount by which the item counter is to be increased. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) of [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) with an updated min-count of each of the items in the sketch. Count of each item after increment. Examples -------- ``` redis> CMS.INCRBY test foo 10 bar 42 1) (integer) 10 2) (integer) 42 ``` redis CLUSTER CLUSTER ======= ``` CLUSTER INFO ``` Syntax ``` CLUSTER INFO ``` Available since: 3.0.0 Time complexity: O(1) ACL categories: `@slow`, `CLUSTER INFO` provides [`INFO`](../info) style information about Redis Cluster vital parameters. The following fields are always present in the reply:

```
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:2
cluster_stats_messages_sent:1483972
cluster_stats_messages_received:1483968
total_cluster_links_buffer_limit_exceeded:0
```

* `cluster_state`: State is `ok` if the node is able to receive queries. 
`fail` if there is at least one hash slot which is unbound (no node associated), in error state (node serving it is flagged with FAIL flag), or if the majority of masters can't be reached by this node. * `cluster_slots_assigned`: Number of slots which are associated to some node (not unbound). This number should be 16384 for the node to work properly, which means that each hash slot should be mapped to a node. * `cluster_slots_ok`: Number of hash slots mapping to a node not in `FAIL` or `PFAIL` state. * `cluster_slots_pfail`: Number of hash slots mapping to a node in `PFAIL` state. Note that those hash slots still work correctly, as long as the `PFAIL` state is not promoted to `FAIL` by the failure detection algorithm. `PFAIL` only means that we are currently not able to talk with the node, but may be just a transient error. * `cluster_slots_fail`: Number of hash slots mapping to a node in `FAIL` state. If this number is not zero the node is not able to serve queries unless `cluster-require-full-coverage` is set to `no` in the configuration. * `cluster_known_nodes`: The total number of known nodes in the cluster, including nodes in `HANDSHAKE` state that may not currently be proper members of the cluster. * `cluster_size`: The number of master nodes serving at least one hash slot in the cluster. * `cluster_current_epoch`: The local `Current Epoch` variable. This is used in order to create unique increasing version numbers during fail overs. * `cluster_my_epoch`: The `Config Epoch` of the node we are talking with. This is the current configuration version assigned to this node. * `cluster_stats_messages_sent`: Number of messages sent via the cluster node-to-node binary bus. * `cluster_stats_messages_received`: Number of messages received via the cluster node-to-node binary bus. * `total_cluster_links_buffer_limit_exceeded`: Accumulated count of cluster links freed due to exceeding the `cluster-link-sendbuf-limit` configuration. 
The following message-related fields may be included in the reply if the value is not 0: Each message type includes statistics on the number of messages sent and received. Here is an explanation of these fields: * `cluster_stats_messages_ping_sent` and `cluster_stats_messages_ping_received`: Cluster bus PING (not to be confused with the client command [`PING`](../ping)). * `cluster_stats_messages_pong_sent` and `cluster_stats_messages_pong_received`: PONG (reply to PING). * `cluster_stats_messages_meet_sent` and `cluster_stats_messages_meet_received`: Handshake message sent to a new node, either through gossip or [`CLUSTER MEET`](../cluster-meet). * `cluster_stats_messages_fail_sent` and `cluster_stats_messages_fail_received`: Mark node xxx as failing. * `cluster_stats_messages_publish_sent` and `cluster_stats_messages_publish_received`: Pub/Sub Publish propagation, see [Pubsub](https://redis.io/topics/pubsub#pubsub). * `cluster_stats_messages_auth-req_sent` and `cluster_stats_messages_auth-req_received`: Replica initiated leader election to replace its master. * `cluster_stats_messages_auth-ack_sent` and `cluster_stats_messages_auth-ack_received`: Message indicating a vote during leader election. * `cluster_stats_messages_update_sent` and `cluster_stats_messages_update_received`: Another node's slots configuration. * `cluster_stats_messages_mfstart_sent` and `cluster_stats_messages_mfstart_received`: Pause clients for manual failover. * `cluster_stats_messages_module_sent` and `cluster_stats_messages_module_received`: Module cluster API message. * `cluster_stats_messages_publishshard_sent` and `cluster_stats_messages_publishshard_received`: Pub/Sub Publish shard propagation, see [Sharded Pubsub](https://redis.io/topics/pubsub#sharded-pubsub). More information about the Current Epoch and Config Epoch variables is available in the [Redis Cluster specification document](https://redis.io/topics/cluster-spec#cluster-current-epoch). 
Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): A map between named fields and values, in the form of `<field>:<value>` lines separated by newlines, where each newline is composed of the two bytes `CRLF`.
redis BITFIELD BITFIELD ======== ``` BITFIELD ``` Syntax ``` BITFIELD key [GET encoding offset | [OVERFLOW <WRAP | SAT | FAIL>] <SET encoding offset value | INCRBY encoding offset increment> [GET encoding offset | [OVERFLOW <WRAP | SAT | FAIL>] <SET encoding offset value | INCRBY encoding offset increment> ...]] ``` Available since: 3.2.0 Time complexity: O(1) for each subcommand specified ACL categories: `@write`, `@bitmap`, `@slow`, The command treats a Redis string as an array of bits, and is capable of addressing specific integer fields of varying bit widths at arbitrary, not necessarily aligned, offsets. In practical terms, using this command you can set, for example, a signed 5-bit integer at bit offset 1234 to a specific value, or retrieve a 31-bit unsigned integer from offset 4567. Similarly the command handles increments and decrements of the specified integers, providing guaranteed and well specified overflow and underflow behavior that the user can configure. `BITFIELD` is able to operate with multiple bit fields in the same command call. It takes a list of operations to perform, and returns an array of replies, where each reply matches the corresponding operation in the list of arguments. For example the following command increments a 5-bit signed integer at bit offset 100, and gets the value of the 4-bit unsigned integer at bit offset 0: ``` > BITFIELD mykey INCRBY i5 100 1 GET u4 0 1) (integer) 1 2) (integer) 0 ``` Note that: 1. Addressing with `GET` bits outside the current string length (including the case the key does not exist at all) results in the operation being performed as if the missing part consisted entirely of bits set to 0. 2. Addressing with `SET` or `INCRBY` bits outside the current string length will enlarge the string, zero-padding it as needed, to the minimal length required by the farthest bit touched. 
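For instance (a minimal sketch assuming a fresh key `bits` that does not exist yet), note 1 can be observed directly: a `GET` addressing bits past the end of the string simply reads zeros. ``` > BITFIELD bits GET u8 0 1) (integer) 0 ``` Because `GET` is a read-only subcommand, the key still does not exist after this call.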
Supported subcommands and integer encoding ------------------------------------------ The following is the list of supported subcommands. * **GET** `<encoding>` `<offset>` -- Returns the specified bit field. * **SET** `<encoding>` `<offset>` `<value>` -- Sets the specified bit field and returns its old value. * **INCRBY** `<encoding>` `<offset>` `<increment>` -- Increments or decrements (if a negative increment is given) the specified bit field and returns the new value. There is another subcommand that only changes the behavior of successive `INCRBY` and `SET` subcommand calls by setting the overflow behavior: * **OVERFLOW** `[WRAP|SAT|FAIL]` Where an integer encoding is expected, it is composed of a prefix, `i` for signed integers or `u` for unsigned integers, followed by the number of bits of the integer encoding. So for example `u8` is an unsigned integer of 8 bits and `i16` is a signed integer of 16 bits. The supported encodings are up to 64 bits for signed integers, and up to 63 bits for unsigned integers. This limitation with unsigned integers is due to the fact that currently the Redis protocol is unable to return 64 bit unsigned integers as replies. Bits and positional offsets --------------------------- There are two ways to specify offsets in the bitfield command. If a number without any prefix is specified, it is used just as a zero-based bit offset inside the string. However if the offset is prefixed with a `#` character, the specified offset is multiplied by the integer encoding's width, so for example: ``` BITFIELD mystring SET i8 #0 100 SET i8 #1 200 ``` Will set the first i8 integer at offset 0 and the second at offset 8. This way you don't have to do the math yourself inside your client if what you want is a plain array of integers of a given size. 
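The same `#` addressing works for reads. As a sketch, continuing with the `mystring` key set by the previous command (and the default `WRAP` overflow behavior), both integers can be read back in one call: ``` > BITFIELD mystring GET i8 #0 GET i8 #1 1) (integer) 100 2) (integer) -56 ``` The second field reads back as `-56` because 200 does not fit in a signed 8-bit integer and wraps around on `SET`.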
Overflow control ---------------- Using the `OVERFLOW` subcommand the user is able to fine-tune the behavior of the increment or decrement overflow (or underflow) by specifying one of the following behaviors: * **WRAP**: wrap around, both with signed and unsigned integers. In the case of unsigned integers, wrapping is like performing the operation modulo the maximum value the integer can contain (the C standard behavior). With signed integers instead wrapping means that overflows restart towards the most negative value and underflows towards the most positive ones, so for example if an `i8` integer is set to the value 127, incrementing it by 1 will yield `-128`. * **SAT**: uses saturation arithmetic, that is, on underflows the value is set to the minimum integer value, and on overflows to the maximum integer value. For example incrementing an `i8` integer starting from value 120 with an increment of 10 will result in the value 127, and further increments will always keep the value at 127. The same happens on underflows, but in that case the value is blocked at the most negative value. * **FAIL**: in this mode no operation is performed when an overflow or underflow is detected. The corresponding return value is set to NULL to signal the condition to the caller. Note that each `OVERFLOW` statement only affects the `INCRBY` and `SET` commands that follow it in the list of subcommands, up to the next `OVERFLOW` statement. By default, **WRAP** is used if not otherwise specified. 
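The `WRAP` example mentioned above can be reproduced directly (a sketch assuming a fresh key `wrapkey`): ``` > BITFIELD wrapkey SET i8 0 127 1) (integer) 0 > BITFIELD wrapkey OVERFLOW WRAP INCRBY i8 0 1 1) (integer) -128 ``` The first reply is `0` because `SET` returns the previous value of the field, which is all zero bits for a key that did not exist.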
``` > BITFIELD mykey incrby u2 100 1 OVERFLOW SAT incrby u2 102 1 1) (integer) 1 2) (integer) 1 > BITFIELD mykey incrby u2 100 1 OVERFLOW SAT incrby u2 102 1 1) (integer) 2 2) (integer) 2 > BITFIELD mykey incrby u2 100 1 OVERFLOW SAT incrby u2 102 1 1) (integer) 3 2) (integer) 3 > BITFIELD mykey incrby u2 100 1 OVERFLOW SAT incrby u2 102 1 1) (integer) 0 2) (integer) 3 ``` Return value ------------ The command returns an array with each entry being the corresponding result of the subcommand given at the same position. `OVERFLOW` subcommands don't count as generating a reply. The following is an example of `OVERFLOW FAIL` returning NULL. ``` > BITFIELD mykey OVERFLOW FAIL incrby u2 102 1 1) (nil) ``` Motivations ----------- The motivation for this command is that the ability to store many small integers as a single large bitmap (or segmented over a few keys to avoid having huge keys) is extremely memory efficient, and opens new use cases for Redis to be applied, especially in the field of real time analytics. These use cases are supported by the ability to specify the overflow in a controlled way. Fun fact: Reddit's 2017 April fools' project [r/place](https://reddit.com/r/place) was [built using the Redis BITFIELD command](https://redditblog.com/2017/04/13/how-we-built-rplace/) in order to store an in-memory representation of the collaborative canvas. Performance considerations -------------------------- Usually `BITFIELD` is a fast command, however note that addressing far bits of currently short strings will trigger an allocation that may be more costly than executing the command on already existing bits. 
Orders of bits -------------- The representation used by `BITFIELD` considers the bitmap as having the bit number 0 to be the most significant bit of the first byte, and so forth, so for example setting a 5-bit unsigned integer to value 23 at offset 7 into a bitmap previously set to all zeroes, will produce the following representation: ``` +--------+--------+ |00000001|01110000| +--------+--------+ ``` When offsets and integer sizes are aligned to byte boundaries, this is the same as big endian; however when such alignment does not exist, it's important to also understand how the bits inside a byte are ordered. redis CF.INSERT CF.INSERT ========= ``` CF.INSERT ``` Syntax ``` CF.INSERT key [CAPACITY capacity] [NOCREATE] ITEMS item [item ...] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 1.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(n * (k + i)), where n is the number of items, k is the number of sub-filters and i is maxIterations Adds one or more items to a cuckoo filter, allowing the filter to be created with a custom capacity if it does not exist yet. This command offers more flexibility than the `ADD` command, at the cost of more verbosity. ### Parameters * **key**: The name of the filter * **capacity**: Specifies the desired capacity of the new filter, if this filter does not exist yet. If the filter already exists, then this parameter is ignored. If the filter does not exist yet and this parameter is *not* specified, then the filter is created with the module-level default capacity which is 1024. See [`CF.RESERVE`](../cf.reserve) for more information on cuckoo filter capacities. * **NOCREATE**: If specified, prevents automatic filter creation if the filter does not exist. Instead, an error is returned if the filter does not already exist. This option is mutually exclusive with `CAPACITY`. * **item**: One or more items to add. The `ITEMS` keyword must precede the list of items to add. 
Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) - "1" if executed correctly, or [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors) otherwise. Examples -------- ``` redis> CF.INSERT cf CAPACITY 1000 ITEMS item1 item2 1) (integer) 1 2) (integer) 1 ``` ``` redis> CF.INSERT cf1 CAPACITY 1000 NOCREATE ITEMS item1 item2 (error) ERR not found ``` redis PUBLISH PUBLISH ======= ``` PUBLISH ``` Syntax ``` PUBLISH channel message ``` Available since: 2.0.0 Time complexity: O(N+M) where N is the number of clients subscribed to the receiving channel and M is the total number of subscribed patterns (by any client). ACL categories: `@pubsub`, `@fast`, Posts a message to the given channel. In a Redis Cluster clients can publish to every node. The cluster makes sure that published messages are forwarded as needed, so clients can subscribe to any channel by connecting to any one of the nodes. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of clients that received the message. Note that in a Redis Cluster, only clients that are connected to the same node as the publishing client are included in the count. redis ZUNION ZUNION ====== ``` ZUNION ``` Syntax ``` ZUNION numkeys key [key ...] [WEIGHTS weight [weight ...]] [AGGREGATE <SUM | MIN | MAX>] [WITHSCORES] ``` Available since: 6.2.0 Time complexity: O(N)+O(M\*log(M)) with N being the sum of the sizes of the input sorted sets, and M being the number of elements in the resulting sorted set. ACL categories: `@read`, `@sortedset`, `@slow`, This command is similar to [`ZUNIONSTORE`](../zunionstore), but instead of storing the resulting sorted set, it is returned to the client. For a description of the `WEIGHTS` and `AGGREGATE` options, see [`ZUNIONSTORE`](../zunionstore). 
Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): the result of union (optionally with their scores, in case the `WITHSCORES` option is given). Examples -------- ``` ZADD zset1 1 "one" ZADD zset1 2 "two" ZADD zset2 1 "one" ZADD zset2 2 "two" ZADD zset2 3 "three" ZUNION 2 zset1 zset2 ZUNION 2 zset1 zset2 WITHSCORES ``` redis COMMAND COMMAND ======= ``` COMMAND INFO ``` Syntax ``` COMMAND INFO [command-name [command-name ...]] ``` Available since: 2.8.13 Time complexity: O(N) where N is the number of commands to look up ACL categories: `@slow`, `@connection`, Returns [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) of details about multiple Redis commands. Same result format as [`COMMAND`](../command) except you can specify which commands get returned. If you request details about non-existing commands, their return position will be nil. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): nested list of command details. Examples -------- ``` COMMAND INFO get set eval COMMAND INFO foo evalsha config bar ``` History ------- * Starting with Redis version 7.0.0: Allowed to be called with no argument to get info on all commands. 
redis FT.SEARCH FT.SEARCH ========= ``` FT.SEARCH ``` Syntax ``` FT.SEARCH index query [NOCONTENT] [VERBATIM] [NOSTOPWORDS] [WITHSCORES] [WITHPAYLOADS] [WITHSORTKEYS] [FILTER numeric_field min max [ FILTER numeric_field min max ...]] [GEOFILTER geo_field lon lat radius m | km | mi | ft [ GEOFILTER geo_field lon lat radius m | km | mi | ft ...]] [INKEYS count key [key ...]] [ INFIELDS count field [field ...]] [RETURN count identifier [AS property] [ identifier [AS property] ...]] [SUMMARIZE [ FIELDS count field [field ...]] [FRAGS num] [LEN fragsize] [SEPARATOR separator]] [HIGHLIGHT [ FIELDS count field [field ...]] [ TAGS open close]] [SLOP slop] [TIMEOUT timeout] [INORDER] [LANGUAGE language] [EXPANDER expander] [SCORER scorer] [EXPLAINSCORE] [PAYLOAD payload] [SORTBY sortby [ ASC | DESC]] [LIMIT offset num] [PARAMS nargs name value [ name value ...]] [DIALECT dialect] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.0.0](https://redis.io/docs/stack/search) Time complexity: O(N) Searches the index with a textual query, returning either documents or just ids [Examples](#examples) Required arguments ------------------ `index` is the index name. You must first create the index using [`FT.CREATE`](../ft.create). `query` is the text query to search. If it's more than a single word, put it in quotes. Refer to [Query syntax](https://redis.io/docs/stack/search/reference/query_syntax) for more details. Optional arguments ------------------ `NOCONTENT` returns the document ids and not the content. This is useful if RediSearch is only an index on an external document collection. `VERBATIM` does not try to use stemming for query expansion but searches the query terms verbatim. `WITHSCORES` also returns the relative internal score of each document. This can be used to merge results from multiple instances. `WITHPAYLOADS` retrieves optional document payloads. See [`FT.CREATE`](../ft.create). 
The payloads follow the document id and, if `WITHSCORES` is set, the scores. `WITHSORTKEYS` returns the value of the sorting key, right after the id and score and/or payload, if requested. This is usually not needed, and exists for distributed search coordination purposes. This option is relevant only if used in conjunction with `SORTBY`. `FILTER numeric_attribute min max` limits results to those having numeric values ranging between `min` and `max`, if numeric_attribute is defined as a numeric attribute in [`FT.CREATE`](../ft.create). `min` and `max` follow [`ZRANGE`](../zrange) syntax, and can be `-inf`, `+inf`, and use `(` for exclusive ranges. Multiple numeric filters for different attributes are supported in one query. `GEOFILTER {geo_attribute} {lon} {lat} {radius} m|km|mi|ft` filters the results to a given `radius` from `lon` and `lat`. Radius is given as a number and units. See [`GEORADIUS`](../georadius) for more details. `INKEYS {num} {attribute} ...` limits the result to a given set of keys specified in the list. The first argument must be the length of the list and greater than zero. Non-existent keys are ignored, unless all the keys are non-existent. `INFIELDS {num} {attribute} ...` filters the results to those appearing only in specific attributes of the document, like `title` or `URL`. You must include `num`, which is the number of attributes you're filtering by. For example, if you request `title` and `URL`, then `num` is 2. `RETURN {num} {identifier} AS {property} ...` limits the attributes returned from the document. `num` is the number of attributes following the keyword. If `num` is 0, it acts like `NOCONTENT`. `identifier` is either an attribute name (for hashes and JSON) or a JSON Path expression (for JSON). `property` is an optional name used in the result. If not provided, the `identifier` is used in the result. `SUMMARIZE ...` returns only the sections of the attribute that contain the matched text. 
See [Highlighting](https://redis.io/docs/stack/search/reference/highlight) for more information. `HIGHLIGHT ...` formats occurrences of matched text. See [Highlighting](https://redis.io/docs/stack/search/reference/highlight) for more information. `SLOP {slop}` is the number of intermediate terms allowed to appear between the terms of the query. Suppose you're searching for a phrase *hello world*. If some terms appear in-between *hello* and *world*, a `SLOP` greater than `0` allows for these text attributes to match. By default, there is no `SLOP` constraint. `INORDER` requires the terms in the document to have the same order as the terms in the query, regardless of the offsets between them. Typically used in conjunction with `SLOP`. Default is `false`. `LANGUAGE {language}` uses a stemmer for the supplied language during search for query expansion. If querying documents in Chinese, set to `chinese` to properly tokenize the query terms. Defaults to English. If an unsupported language is sent, the command returns an error. See [`FT.CREATE`](../ft.create) for the list of languages. `EXPANDER {expander}` uses a custom query expander instead of the stemmer. See [Extensions](https://redis.io/docs/stack/search/reference/extensions). `SCORER {scorer}` uses a custom scoring function you define. See [Extensions](https://redis.io/docs/stack/search/reference/extensions). `EXPLAINSCORE` returns a textual description of how the scores were calculated. Using this option requires `WITHSCORES`. `PAYLOAD {payload}` adds an arbitrary, binary safe payload that is exposed to custom scoring functions. See [Extensions](https://redis.io/docs/stack/search/reference/extensions). `SORTBY {attribute} [ASC|DESC]` orders the results by the value of this attribute. This applies to both text and numeric attributes. Attributes needed for `SORTBY` should be declared as `SORTABLE` in the index, in order to be available with very low latency. Note that this adds memory overhead. 
`LIMIT first num` limits the results to the offset and number of results given. Note that the offset is zero-indexed. The default is 0 10, which returns 10 items starting from the first result. You can use `LIMIT 0 0` to count the number of documents in the result set without actually returning them. `TIMEOUT {milliseconds}` overrides the timeout parameter of the module. `PARAMS {nargs} {name} {value}` defines one or more value parameters. Each parameter has a name and a value. You can reference parameters in the `query` by a `$`, followed by the parameter name, for example, `$user`. Each such reference in the search query to a parameter name is substituted by the corresponding parameter value. For example, with parameter definition `PARAMS 4 lon 29.69465 lat 34.95126`, the expression `@loc:[$lon $lat 10 km]` is evaluated to `@loc:[29.69465 34.95126 10 km]`. You cannot reference parameters in the query string where concrete values are not allowed, such as in field names, for example, `@loc`. To use `PARAMS`, set `DIALECT` to `2` or greater than `2`. `DIALECT {dialect_version}` selects the dialect version under which to execute the query. If not specified, the query will execute under the default dialect version set during module initial loading or via [`FT.CONFIG SET`](../ft.config-set) command. Return ------ FT.SEARCH returns an array reply, where the first element is an integer reply of the total number of results, and then array reply pairs of document ids, and array replies of attribute/value pairs. Notes * If `NOCONTENT` is given, an array is returned where the first element is the total number of results, and the rest of the members are document ids. * If a hash expires after the query process starts, the hash is counted in the total number of results, but the key name and content return as null. 
### Return multiple values When the index is defined `ON JSON`, a reply for a single attribute or a single JSONPath may return multiple values when the JSONPath matches multiple values, or when the JSONPath matches an array. Prior to RediSearch v2.6, only the first of the matched values was returned. Starting with RediSearch v2.6, all values are returned, wrapped with a top-level array. In order to maintain backward compatibility, the default behavior with RediSearch v2.6 is to return only the first value. To return all the values, use `DIALECT` 3 (or greater, when available). The `DIALECT` can be specified as a parameter in the FT.SEARCH command. If it is not specified, the `DEFAULT_DIALECT` is used, which can be set using [`FT.CONFIG SET`](../ft.config-set) or by passing it as an argument to the `redisearch` module when it is loaded. For example, with the following document and index: ``` 127.0.0.1:6379> JSON.SET doc:1 $ '[{"arr": [1, 2, 3]}, {"val": "hello"}, {"val": "world"}]' OK 127.0.0.1:6379> FT.CREATE idx ON JSON PREFIX 1 doc: SCHEMA $..arr AS arr NUMERIC $..val AS val TEXT OK ``` Notice the different replies, with and without `DIALECT 3`: ``` 127.0.0.1:6379> FT.SEARCH idx * RETURN 2 arr val 1) (integer) 1 2) "doc:1" 3) 1) "arr" 2) "[1,2,3]" 3) "val" 4) "hello" 127.0.0.1:6379> FT.SEARCH idx * RETURN 2 arr val DIALECT 3 1) (integer) 1 2) "doc:1" 3) 1) "arr" 2) "[[1,2,3]]" 3) "val" 4) "[\"hello\",\"world\"]" ``` Complexity ---------- FT.SEARCH complexity is O(n) for single word queries. `n` is the number of the results in the result set. Finding all the documents that have a specific term is O(1), however, a scan on all those documents is needed to load the documents data from redis hashes and return them. The time complexity for more complex queries varies, but in general it's proportional to the number of words, the number of intersection points between them and the number of results in the result set. 
Examples -------- **Search for a term in every text attribute** Search for the term "wizard" in every TEXT attribute of an index containing book data. ``` 127.0.0.1:6379> FT.SEARCH books-idx "wizard" ``` **Search for a term in title attribute** Search for the term *dogs* in the `title` attribute. ``` 127.0.0.1:6379> FT.SEARCH books-idx "@title:dogs" ``` **Search for books from specific years** Search for books published in 2020 or 2021. ``` 127.0.0.1:6379> FT.SEARCH books-idx "@published_at:[2020 2021]" ``` **Search for a restaurant by distance from longitude/latitude** Search for Chinese restaurants within 5 kilometers of longitude -122.41, latitude 37.77 (San Francisco). ``` 127.0.0.1:6379> FT.SEARCH restaurants-idx "chinese @location:[-122.41 37.77 5 km]" ``` **Search for a book by terms but boost specific term** Search for the term *dogs* or *cats* in the `title` attribute, but give matches of *dogs* a higher relevance score (also known as *boosting*). ``` 127.0.0.1:6379> FT.SEARCH books-idx "(@title:dogs | @title:cats) | (@title:dogs) => { $weight: 5.0; }" ``` **Search for a book by a term and EXPLAINSCORE** Search for books with *dogs* in any TEXT attribute in the index and request an explanation of scoring for each result. ``` 127.0.0.1:6379> FT.SEARCH books-idx "dogs" WITHSCORES EXPLAINSCORE ``` **Search for a book by a term and TAG** Search for books with *space* in the title that have `science` in the TAG attribute `categories`. ``` 127.0.0.1:6379> FT.SEARCH books-idx "@title:space @categories:{science}" ``` **Search for a book by a term but limit the number** Search for books with *Python* in any `TEXT` attribute, returning 10 results starting with the 11th result in the entire result set (the offset parameter is zero-based), and return only the `title` attribute for each result. 
``` 127.0.0.1:6379> FT.SEARCH books-idx "python" LIMIT 10 10 RETURN 1 title ``` **Search for a book by a term and price** Search for books with *Python* in any `TEXT` attribute, returning the price stored in the original JSON document. ``` 127.0.0.1:6379> FT.SEARCH books-idx "python" RETURN 3 $.book.price AS price ``` **Search for a book by title and distance** Search for books with semantically similar title to *Planet Earth*. Return top 10 results sorted by distance. ``` 127.0.0.1:6379> FT.SEARCH books-idx "*=>[KNN 10 @title_embedding $query_vec AS title_score]" PARAMS 2 query_vec <"Planet Earth" embedding BLOB> SORTBY title_score DIALECT 2 ``` **Search for a phrase using SLOP** Search for a phrase *hello world*. First, create an index. ``` 127.0.0.1:6379> FT.CREATE memes SCHEMA phrase TEXT OK ``` Add variations of the phrase *hello world*. ``` 127.0.0.1:6379> HSET s1 phrase "hello world" (integer) 1 127.0.0.1:6379> HSET s2 phrase "hello simple world" (integer) 1 127.0.0.1:6379> HSET s3 phrase "hello somewhat less simple world" (integer) 1 127.0.0.1:6379> HSET s4 phrase "hello complicated yet encouraging problem solving world" (integer) 1 127.0.0.1:6379> HSET s5 phrase "hello complicated yet amazingly encouraging problem solving world" (integer) 1 ``` Then, search for the phrase *hello world*. The result returns all documents that contain the phrase. ``` 127.0.0.1:6379> FT.SEARCH memes '@phrase:(hello world)' NOCONTENT 1) (integer) 5 2) "s1" 3) "s2" 4) "s3" 5) "s4" 6) "s5" ``` Now, return all documents that have one or fewer words between *hello* and *world*. ``` 127.0.0.1:6379> FT.SEARCH memes '@phrase:(hello world)' NOCONTENT SLOP 1 1) (integer) 2 2) "s1" 3) "s2" ``` Now, return all documents with three or fewer words between *hello* and *world*. ``` 127.0.0.1:6379> FT.SEARCH memes '@phrase:(hello world)' NOCONTENT SLOP 3 1) (integer) 3 2) "s1" 3) "s2" 4) "s3" ``` `s5` needs a higher `SLOP` to match, `SLOP 6` or higher, to be exact. 
See what happens when you set `SLOP` to `5`. ``` 127.0.0.1:6379> FT.SEARCH memes '@phrase:(hello world)' NOCONTENT SLOP 5 1) (integer) 4 2) "s1" 3) "s2" 4) "s3" 5) "s4" ``` If you add additional terms (and stemming), you get these results. ``` 127.0.0.1:6379> FT.SEARCH memes '@phrase:(hello amazing world)' NOCONTENT 1) (integer) 1 2) "s5" ``` ``` 127.0.0.1:6379> FT.SEARCH memes '@phrase:(hello encouraged world)' NOCONTENT SLOP 5 1) (integer) 2 2) "s4" 3) "s5" ``` ``` 127.0.0.1:6379> FT.SEARCH memes '@phrase:(hello encouraged world)' NOCONTENT SLOP 4 1) (integer) 1 2) "s4" ``` If you swap the terms, you can still retrieve the correct phrase. ``` 127.0.0.1:6379> FT.SEARCH memes '@phrase:(amazing hello world)' NOCONTENT 1) (integer) 1 2) "s5" ``` But, if you use `INORDER`, you get zero results. ``` 127.0.0.1:6379> FT.SEARCH memes '@phrase:(amazing hello world)' NOCONTENT INORDER 1) (integer) 0 ``` Likewise, if you use a query attribute `$inorder` set to `true`, `s5` is not retrieved. ``` 127.0.0.1:6379> FT.SEARCH memes '@phrase:(amazing hello world)=>{$inorder: true;}' NOCONTENT 1) (integer) 0 ``` To sum up, the `INORDER` argument or `$inorder` query attribute requires the query terms to appear in the document in the same order as in the query. See also -------- [`FT.CREATE`](../ft.create) | [`FT.AGGREGATE`](../ft.aggregate) Related topics -------------- * [Extensions](https://redis.io/docs/stack/search/reference/extensions) * [Highlighting](https://redis.io/docs/stack/search/reference/highlight) * [Query syntax](https://redis.io/docs/stack/search/reference/query_syntax) * [RediSearch](https://redis.io/docs/stack/search) History ------- * Starting with Redis version 2.0.0: Deprecated `WITHPAYLOADS` and `PAYLOAD` arguments
redis CMS.MERGE CMS.MERGE ========= ``` CMS.MERGE ``` Syntax ``` CMS.MERGE destination numKeys source [source ...] [WEIGHTS weight [weight ...]] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(n) where n is the number of sketches Merges several sketches into one sketch. All sketches must have identical width and depth. Weights can be used to multiply certain sketches. Default weight is 1. ### Parameters: * **dest**: The name of destination sketch. Must be initialized. * **numKeys**: Number of sketches to be merged. * **src**: Names of source sketches to be merged. * **weight**: Multiplier applied to each corresponding source sketch. Default is 1. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) - `OK` if executed correctly, or [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors) otherwise. Examples -------- ``` redis> CMS.MERGE dest 2 test1 test2 WEIGHTS 1 3 OK ``` redis XGROUP XGROUP ====== ``` XGROUP DESTROY ``` Syntax ``` XGROUP DESTROY key group ``` Available since: 5.0.0 Time complexity: O(N) where N is the number of entries in the group's pending entries list (PEL). ACL categories: `@write`, `@stream`, `@slow`, The `XGROUP DESTROY` command completely destroys a consumer group. The consumer group will be destroyed even if there are active consumers, and pending messages, so make sure to call this command only when really needed. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of destroyed consumer groups (0 or 1) redis MODULE MODULE ====== ``` MODULE LIST ``` Syntax ``` MODULE LIST ``` Available since: 4.0.0 Time complexity: O(N) where N is the number of loaded modules. ACL categories: `@admin`, `@slow`, `@dangerous`, Returns information about the modules loaded to the server. 
Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list of loaded modules. Each element in the list represents a module, and is in itself a list of property names and their values. The following properties are reported for each loaded module: * `name`: Name of the module * `ver`: Version of the module redis TYPE TYPE ==== ``` TYPE ``` Syntax ``` TYPE key ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@keyspace`, `@read`, `@fast`, Returns the string representation of the type of the value stored at `key`. The different types that can be returned are: `string`, `list`, `set`, `zset`, `hash` and `stream`. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): type of `key`, or `none` when `key` does not exist. Examples -------- ``` SET key1 "value" LPUSH key2 "value" SADD key3 "value" TYPE key1 TYPE key2 TYPE key3 ``` redis SETEX SETEX ===== ``` SETEX (deprecated) ``` As of Redis version 2.6.12, this command is regarded as deprecated. It can be replaced by [`SET`](../set) with the `EX` argument when migrating or writing new code. Syntax ``` SETEX key seconds value ``` Available since: 2.0.0 Time complexity: O(1) ACL categories: `@write`, `@string`, `@slow`, Set `key` to hold the string `value` and set `key` to timeout after a given number of seconds. This command is equivalent to: ``` SET key value EX seconds ``` An error is returned when `seconds` is invalid. 
Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) Examples -------- ``` SETEX mykey 10 "Hello" TTL mykey GET mykey ``` See also -------- [`TTL`](../ttl) redis MIGRATE MIGRATE ======= ``` MIGRATE ``` Syntax ``` MIGRATE host port <key | ""> destination-db timeout [COPY] [REPLACE] [AUTH password | AUTH2 username password] [KEYS key [key ...]] ``` Available since: 2.6.0 Time complexity: This command actually executes a DUMP+DEL in the source instance, and a RESTORE in the target instance. See the pages of these commands for time complexity. Also an O(N) data transfer between the two instances is performed. ACL categories: `@keyspace`, `@write`, `@slow`, `@dangerous`, Atomically transfer a key from a source Redis instance to a destination Redis instance. On success the key is deleted from the original instance and is guaranteed to exist in the target instance. The command is atomic and blocks the two instances for the time required to transfer the key; at any given time the key will appear to exist in either the source or the destination instance, unless a timeout error occurs. In 3.2 and above, multiple keys can be pipelined in a single call to `MIGRATE` by passing the empty string ("") as key and adding the `KEYS` clause. The command internally uses [`DUMP`](../dump) to generate the serialized version of the key value, and [`RESTORE`](../restore) in order to synthesize the key in the target instance. The source instance acts as a client for the target instance. If the target instance returns OK to the [`RESTORE`](../restore) command, the source instance deletes the key using [`DEL`](../del). The timeout specifies the maximum idle time in any moment of the communication with the destination instance in milliseconds. 
This means that the operation does not need to be completed within the specified amount of milliseconds, but that the transfer should make progress without blocking for more than the specified amount of milliseconds. `MIGRATE` needs to perform I/O operations and to honor the specified timeout. When there is an I/O error during the transfer or if the timeout is reached, the operation is aborted and the special error `IOERR` is returned. When this happens the following two cases are possible: * The key may be present in both instances. * The key may be present only in the source instance. It is not possible for the key to get lost in the event of a timeout, but the client calling `MIGRATE`, in the event of a timeout error, should check if the key is *also* present in the target instance and act accordingly. When any other error is returned (starting with `ERR`) `MIGRATE` guarantees that the key is still only present in the originating instance (unless a key with the same name was also *already* present on the target instance). If there are no keys to migrate in the source instance `NOKEY` is returned. Because missing keys are possible in normal conditions, from expiry for example, `NOKEY` isn't an error. Migrating multiple keys with a single command call -------------------------------------------------- Starting with Redis 3.0.6 `MIGRATE` supports a new bulk-migration mode that uses pipelining in order to migrate multiple keys between instances without incurring the round-trip latency and other overheads involved when moving each key with a single `MIGRATE` call. In order to enable this form, the `KEYS` option is used, and the normal *key* argument is set to an empty string. 
The actual key names will be provided after the `KEYS` argument itself, like in the following example: ``` MIGRATE 192.168.1.34 6379 "" 0 5000 KEYS key1 key2 key3 ``` When this form is used the `NOKEY` status code is only returned when none of the keys is present in the instance, otherwise the command is executed, even if just a single key exists. Options ------- * `COPY` -- Do not remove the key from the local instance. * `REPLACE` -- Replace existing key on the remote instance. * `KEYS` -- If the key argument is an empty string, the command will instead migrate all the keys that follow the `KEYS` option (see the above section for more info). * `AUTH` -- Authenticate with the given password to the remote instance. * `AUTH2` -- Authenticate with the given username and password pair (Redis 6 or greater ACL auth style). Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): The command returns OK on success, or `NOKEY` if no keys were found in the source instance. History ------- * Starting with Redis version 3.0.0: Added the `COPY` and `REPLACE` options. * Starting with Redis version 3.0.6: Added the `KEYS` option. * Starting with Redis version 4.0.7: Added the `AUTH` option. * Starting with Redis version 6.0.0: Added the `AUTH2` option. redis JSON.SET JSON.SET ======== ``` JSON.SET ``` Syntax ``` JSON.SET key path value [NX | XX] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [JSON 1.0.0](https://redis.io/docs/stack/json) Time complexity: O(M+N) when path is evaluated to a single value where M is the size of the original value (if it exists) and N is the size of the new value, O(M+N) when path is evaluated to multiple values where M is the size of the key and N is the size of the new value Set the JSON value at `path` in `key` [Examples](#examples) Required arguments ------------------ `key` is key to modify. 
`value` is value to set at the specified path Optional arguments ------------------ `path` is JSONPath to specify. Default is root `$`. For new Redis keys the `path` must be the root. For existing keys, when the entire `path` exists, the value that it contains is replaced with the `json` value. For existing keys, when the `path` exists except for the last element, a new child is added with the `json` value. Adds a key (with its respective value) to a JSON Object (in a RedisJSON data type key) only if it is the last child in the `path`, or it is the parent of a new child being added in the `path`. Optional arguments `NX` and `XX` modify this behavior for both new RedisJSON data type keys as well as the JSON Object keys in them. `NX` sets the key only if it does not already exist. `XX` sets the key only if it already exists. Return value ------------ JSON.SET returns a simple string reply: `OK` if executed correctly or `nil` if the specified `NX` or `XX` conditions were not met. For more information about replies, see [Redis serialization protocol specification](https://redis.io/docs/reference/protocol-spec). 
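The `NX`/`XX` conditions above can be illustrated with a toy in-memory model. This is a sketch of the semantics only (the `json_set` helper and the plain-dict keyspace are hypothetical, not the RedisJSON implementation), covering the root-path case:

```python
# Toy model of JSON.SET's NX/XX conditions (root path only).
# `store` is a plain dict standing in for the keyspace; this is a
# sketch of the semantics, not the RedisJSON implementation.
def json_set(store, key, value, nx=False, xx=False):
    exists = key in store
    if (nx and exists) or (xx and not exists):
        return None          # condition not met -> nil reply
    store[key] = value
    return "OK"

store = {}
print(json_set(store, "doc", {"a": 2}, nx=True))   # OK: key is new
print(json_set(store, "doc", {"a": 3}, nx=True))   # None: NX but key exists
print(json_set(store, "doc", {"a": 3}, xx=True))   # OK: XX and key exists
```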
Examples -------- **Replace an existing value** ``` 127.0.0.1:6379> JSON.SET doc $ '{"a":2}' OK 127.0.0.1:6379> JSON.SET doc $.a '3' OK 127.0.0.1:6379> JSON.GET doc $ "[{\"a\":3}]" ``` **Add a new value** ``` 127.0.0.1:6379> JSON.SET doc $ '{"a":2}' OK 127.0.0.1:6379> JSON.SET doc $.b '8' OK 127.0.0.1:6379> JSON.GET doc $ "[{\"a\":2,\"b\":8}]" ``` **Update multi-paths** ``` 127.0.0.1:6379> JSON.SET doc $ '{"f1": {"a":1}, "f2":{"a":2}}' OK 127.0.0.1:6379> JSON.SET doc $..a 3 OK 127.0.0.1:6379> JSON.GET doc "{\"f1\":{\"a\":3},\"f2\":{\"a\":3}}" ``` See also -------- [`JSON.GET`](../json.get) | [`JSON.MGET`](../json.mget) Related topics -------------- * [RedisJSON](https://redis.io/docs/stack/json) * [Index and search JSON documents](https://redis.io/docs/stack/search/indexing_json) redis MODULE MODULE ====== ``` MODULE UNLOAD ``` Syntax ``` MODULE UNLOAD name ``` Available since: 4.0.0 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, Unloads a module. This command unloads the module specified by `name`. Note that the module's name is reported by the [`MODULE LIST`](../module-list) command, and may differ from the dynamic library's filename. Known limitations: * Modules that register custom data types can not be unloaded. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` if module was unloaded. redis SETBIT SETBIT ====== ``` SETBIT ``` Syntax ``` SETBIT key offset value ``` Available since: 2.2.0 Time complexity: O(1) ACL categories: `@write`, `@bitmap`, `@slow`, Sets or clears the bit at *offset* in the string value stored at *key*. The bit is either set or cleared depending on *value*, which can be either 0 or 1. When *key* does not exist, a new string value is created. The string is grown to make sure it can hold a bit at *offset*. The *offset* argument is required to be greater than or equal to 0, and smaller than 2^32 (this limits bitmaps to 512MB). 
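The 512MB figure follows directly from the offset limit: 2^32 bits divided by 8 bits per byte. A quick arithmetic check:

```python
# Why the 2^32 offset limit caps a bitmap at 512 MB:
max_bits = 2 ** 32                 # highest allowed offset is 2^32 - 1
max_bytes = max_bits // 8          # 8 bits per byte
print(max_bytes)                   # 536870912
print(max_bytes // (1024 * 1024))  # 512 (MB)
```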
When the string at *key* is grown, added bits are set to 0. **Warning**: When setting the last possible bit (*offset* equal to 2^32 -1) and the string value stored at *key* does not yet hold a string value, or holds a small string value, Redis needs to allocate all intermediate memory which can block the server for some time. On a 2010 MacBook Pro, setting bit number 2^32 -1 (512MB allocation) takes ~300ms, setting bit number 2^30 -1 (128MB allocation) takes ~80ms, setting bit number 2^28 -1 (32MB allocation) takes ~30ms and setting bit number 2^26 -1 (8MB allocation) takes ~8ms. Note that once this first allocation is done, subsequent calls to `SETBIT` for the same *key* will not have the allocation overhead. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the original bit value stored at *offset*. Examples -------- ``` SETBIT mykey 7 1 SETBIT mykey 7 0 GET mykey ``` Pattern: accessing the entire bitmap ------------------------------------ There are cases when you need to set all the bits of a single bitmap at once, for example when initializing it to a default non-zero value. It is possible to do this with multiple calls to the `SETBIT` command, one for each bit that needs to be set. However, as an optimization you can use a single [`SET`](../set) command to set the entire bitmap. Bitmaps are not an actual data type, but a set of bit-oriented operations defined on the String type (for more information refer to the [Bitmaps section of the Data Types Introduction page](https://redis.io/topics/data-types-intro#bitmaps)). This means that bitmaps can be used with string commands, and most importantly with [`SET`](../set) and [`GET`](../get). Because Redis' strings are binary-safe, a bitmap is trivially encoded as a bytes stream. The first byte of the string corresponds to offsets 0..7 of the bitmap, the second byte to the 8..15 range, and so forth. 
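The bit-to-byte layout just described (offset 0 is the most significant bit of the first byte) can be simulated in plain Python. This is a sketch of the encoding only, not Redis itself:

```python
# Simulate SETBIT's bit-to-byte mapping: offset 0 is the MSB of byte 0,
# offset 7 the LSB of byte 0, offset 8 the MSB of byte 1, and so on.
def setbit(buf: bytearray, offset: int, value: int) -> int:
    byte_i, bit_i = divmod(offset, 8)
    if byte_i >= len(buf):
        buf.extend(b"\x00" * (byte_i - len(buf) + 1))  # grow; new bits are 0
    mask = 1 << (7 - bit_i)
    old = (buf[byte_i] >> (7 - bit_i)) & 1
    if value:
        buf[byte_i] |= mask
    else:
        buf[byte_i] &= ~mask & 0xFF
    return old  # like SETBIT, return the original bit value

buf = bytearray()
for off in (2, 3, 5, 10, 11, 14):
    setbit(buf, off, 1)
print(buf.decode())  # "42"
```

Setting the same offsets as in the `bitmapsarestrings` example below reproduces the string `"42"`.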
For example, after setting a few bits, getting the string value of the bitmap would look like this: ``` > SETBIT bitmapsarestrings 2 1 > SETBIT bitmapsarestrings 3 1 > SETBIT bitmapsarestrings 5 1 > SETBIT bitmapsarestrings 10 1 > SETBIT bitmapsarestrings 11 1 > SETBIT bitmapsarestrings 14 1 > GET bitmapsarestrings "42" ``` By getting the string representation of a bitmap, the client can then parse the response's bytes by extracting the bit values using native bit operations in its native programming language. Symmetrically, it is also possible to set an entire bitmap by performing the bits-to-bytes encoding in the client and calling [`SET`](../set) with the resultant string. Pattern: setting multiple bits ------------------------------ `SETBIT` excels at setting single bits, and can be called several times when multiple bits need to be set. To optimize this operation you can replace multiple `SETBIT` calls with a single call to the variadic [`BITFIELD`](../bitfield) command and the use of fields of type `u1`. For example, the example above could be replaced by: ``` > BITFIELD bitsinabitmap SET u1 2 1 SET u1 3 1 SET u1 5 1 SET u1 10 1 SET u1 11 1 SET u1 14 1 ``` Advanced Pattern: accessing bitmap ranges ----------------------------------------- It is also possible to use the [`GETRANGE`](../getrange) and [`SETRANGE`](../setrange) string commands to efficiently access a range of bit offsets in a bitmap. Below is a sample implementation in idiomatic Redis Lua scripting that can be run with the [`EVAL`](../eval) command: ``` --[[ Sets a bitmap range Bitmaps are stored as Strings in Redis. A range spans one or more bytes, so we can call [`SETRANGE`](/commands/setrange) when entire bytes need to be set instead of flipping individual bits. Also, to avoid multiple internal memory allocations in Redis, we traverse in reverse. 
Expected input: KEYS[1] - bitfield key ARGV[1] - start offset (0-based, inclusive) ARGV[2] - end offset (same, should be bigger than start, no error checking) ARGV[3] - value (should be 0 or 1, no error checking) ]]-- -- A helper function to stringify a binary string to semi-binary format local function tobits(str) local r = '' for i = 1, string.len(str) do local c = string.byte(str, i) local b = ' ' for j = 0, 7 do b = tostring(bit.band(c, 1)) .. b c = bit.rshift(c, 1) end r = r .. b end return r end -- Main local k = KEYS[1] local s, e, v = tonumber(ARGV[1]), tonumber(ARGV[2]), tonumber(ARGV[3]) -- First treat the dangling bits in the last byte local ms, me = s % 8, (e + 1) % 8 if me > 0 then local t = math.max(e - me + 1, s) for i = e, t, -1 do redis.call('SETBIT', k, i, v) end e = t end -- Then the danglings in the first byte if ms > 0 then local t = math.min(s - ms + 7, e) for i = s, t, 1 do redis.call('SETBIT', k, i, v) end s = t + 1 end -- Set a range accordingly, if at all local rs, re = s / 8, (e + 1) / 8 local rl = re - rs if rl > 0 then local b = '\255' if 0 == v then b = '\0' end redis.call('SETRANGE', k, rs, string.rep(b, rl)) end ``` **Note:** the implementation for getting a range of bit offsets from a bitmap is left as an exercise to the reader. redis TS.MADD TS.MADD ======= ``` TS.MADD ``` Syntax ``` TS.MADD {key timestamp value}... ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [TimeSeries 1.0.0](https://redis.io/docs/stack/timeseries) Time complexity: O(N\*M) when N is the amount of series updated and M is the amount of compaction rules or O(N) with no compaction Append new samples to one or more time series [Examples](#examples) Required arguments ------------------ `key` is the key name for the time series. `timestamp` is (integer) UNIX sample timestamp in milliseconds or `*` to set the timestamp according to the server clock. `value` is numeric data value of the sample (double). 
The double number should follow [RFC 7159](https://tools.ietf.org/html/rfc7159) (a JSON standard). The parser rejects overly large values that would not fit in binary64. It does not accept NaN or infinite values. **Notes:** * If `timestamp` is older than the retention period compared to the maximum existing timestamp, the sample is discarded and an error is returned. * Explicitly adding samples to a compacted time series (using [`TS.ADD`](../ts.add), `TS.MADD`, [`TS.INCRBY`](../ts.incrby), or [`TS.DECRBY`](../ts.decrby)) may result in inconsistencies between the raw and the compacted data. The compaction process may override such samples. Return value ------------ Either * [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) of [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) representing the timestamp of each added sample or an [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors) (e.g., on `DUPLICATE_POLICY` violation) * [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors) (e.g., on wrong number of arguments) Complexity ---------- If a compaction rule exists on a time series, TS.MADD performance might be reduced. The complexity of TS.MADD is always `O(N*M)`, where `N` is the amount of series updated and `M` is the amount of compaction rules or `O(N)` with no compaction. Examples -------- **Add stock prices at different timestamps** Create two stocks and add their prices at three different timestamps. 
``` 127.0.0.1:6379> TS.CREATE stock:A LABELS type stock name A OK 127.0.0.1:6379> TS.CREATE stock:B LABELS type stock name B OK 127.0.0.1:6379> TS.MADD stock:A 1000 100 stock:A 1010 110 stock:A 1020 120 stock:B 1000 120 stock:B 1010 110 stock:B 1020 100 1) (integer) 1000 2) (integer) 1010 3) (integer) 1020 4) (integer) 1000 5) (integer) 1010 6) (integer) 1020 ``` See also -------- [`TS.MRANGE`](../ts.mrange) | [`TS.RANGE`](../ts.range) | [`TS.MREVRANGE`](../ts.mrevrange) | [`TS.REVRANGE`](../ts.revrange) Related topics -------------- [RedisTimeSeries](https://redis.io/docs/stack/timeseries)
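The sample-value rules described above for `TS.MADD` (a finite binary64 double; no NaN, no infinities, no overflow) can be sketched with a small validator. `parse_sample_value` is a hypothetical helper for illustration, not part of RedisTimeSeries:

```python
import math

# Hypothetical validator for TS.MADD sample values: the value must parse
# as a finite IEEE 754 binary64 double (no NaN, no infinities, and no
# values so large they overflow to infinity). A sketch of the rule only.
def parse_sample_value(raw: str) -> float:
    value = float(raw)
    if not math.isfinite(value):   # rejects nan, inf, and overflow to inf
        raise ValueError(f"invalid sample value: {raw!r}")
    return value

print(parse_sample_value("64.99"))   # 64.99
try:
    parse_sample_value("nan")        # rejected
except ValueError as e:
    print(e)
```

Note that Python's `float()` is looser than RFC 7159 (it accepts tokens such as `"nan"` at all); the finiteness check is what matters here.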
redis FT.SYNDUMP FT.SYNDUMP ========== ``` FT.SYNDUMP ``` Syntax ``` FT.SYNDUMP index ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.2.0](https://redis.io/docs/stack/search) Time complexity: O(1) Dump the contents of a synonym group [Examples](#examples) Required arguments ------------------ `index` is index name. Use FT.SYNDUMP to dump the synonyms data structure. This command returns a list of synonym terms and their synonym group ids. Return ------ FT.SYNDUMP returns an array reply, with a pair of `term` and an array of synonym groups. Examples -------- **Return the contents of a synonym group** ``` 127.0.0.1:6379> FT.SYNDUMP idx 1) "shalom" 2) 1) "synonym1" 2) "synonym2" 3) "hi" 4) 1) "synonym1" 5) "hello" 6) 1) "synonym1" ``` See also -------- [`FT.SYNUPDATE`](../ft.synupdate) Related topics -------------- [RediSearch](https://redis.io/docs/stack/search) redis XREADGROUP XREADGROUP ========== ``` XREADGROUP ``` Syntax ``` XREADGROUP GROUP group consumer [COUNT count] [BLOCK milliseconds] [NOACK] STREAMS key [key ...] id [id ...] ``` Available since: 5.0.0 Time complexity: For each stream mentioned: O(M) with M being the number of elements returned. If M is constant (e.g. always asking for the first 10 elements with COUNT), you can consider it O(1). On the other side when XREADGROUP blocks, XADD will pay the O(N) time in order to serve the N clients blocked on the stream getting new data. ACL categories: `@write`, `@stream`, `@slow`, `@blocking`, The `XREADGROUP` command is a special version of the [`XREAD`](../xread) command with support for consumer groups. You will probably need to understand the [`XREAD`](../xread) command before this page makes sense. Moreover, if you are new to streams, we recommend reading our [introduction to Redis Streams](https://redis.io/topics/streams-intro). Make sure to understand the concept of consumer group in the introduction so that following how this command works will be simpler. 
Consumer groups in 30 seconds ----------------------------- The difference between this command and the vanilla [`XREAD`](../xread) is that this one supports consumer groups. Without consumer groups, just using [`XREAD`](../xread), all the clients are served with all the entries arriving in a stream. Instead using consumer groups with `XREADGROUP`, it is possible to create groups of clients that consume different parts of the messages arriving in a given stream. If, for instance, the stream gets the new entries A, B, and C and there are two consumers reading via a consumer group, one client will get, for instance, the messages A and C, and the other the message B, and so forth. Within a consumer group, a given consumer (that is, just a client consuming messages from the stream) has to identify itself with a unique *consumer name*, which is just a string. One of the guarantees of consumer groups is that a given consumer can only see the history of messages that were delivered to it, so a message has just a single owner. However there is a special feature called *message claiming* that allows other consumers to claim messages in case there is a non recoverable failure of some consumer. In order to implement such semantics, consumer groups require explicit acknowledgment of the messages successfully processed by the consumer, via the [`XACK`](../xack) command. This is needed because the stream will track, for each consumer group, who is processing what message. This is how to understand if you want to use a consumer group or not: 1. If you have a stream and multiple clients, and you want all the clients to get all the messages, you do not need a consumer group. 2. If you have a stream and multiple clients, and you want the stream to be *partitioned* or *sharded* across your clients, so that each client will get a subset of the messages arriving in a stream, you need a consumer group. 
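The single-owner guarantee above can be illustrated with a toy in-memory dispatcher: each new entry is handed to exactly one consumer and tracked in the group's Pending Entries List (PEL) until acknowledged. A sketch of the semantics, not the Redis implementation:

```python
# Toy model of consumer-group delivery: each new entry goes to exactly
# one consumer and is remembered in the group's Pending Entries List
# (PEL) until acknowledged. A sketch only, not Redis itself.
class ToyGroup:
    def __init__(self):
        self.undelivered = []    # entries not yet seen by any consumer
        self.pel = {}            # entry id -> owning consumer name

    def xreadgroup(self, consumer, count=1):
        batch = self.undelivered[:count]
        del self.undelivered[:count]
        for entry_id, _fields in batch:
            self.pel[entry_id] = consumer   # a single owner per message
        return batch

    def xack(self, entry_id):
        return 1 if self.pel.pop(entry_id, None) else 0

g = ToyGroup()
g.undelivered = [("1-0", "A"), ("1-1", "B"), ("1-2", "C")]
print(g.xreadgroup("alice", 2))   # alice now owns 1-0 and 1-1
print(g.xreadgroup("bob", 2))     # bob gets the remaining entry 1-2
print(g.xack("1-0"))              # 1: entry removed from the PEL
```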
Differences between XREAD and XREADGROUP ---------------------------------------- From the point of view of the syntax, the commands are almost the same, however `XREADGROUP` *requires* a special and mandatory option: ``` GROUP <group-name> <consumer-name> ``` The group name is just the name of a consumer group associated to the stream. The group is created using the [`XGROUP`](../xgroup) command. The consumer name is the string that is used by the client to identify itself inside the group. The consumer is auto-created inside the consumer group the first time it is seen. Different clients should select a different consumer name. When you read with `XREADGROUP`, the server will *remember* that a given message was delivered to you: the message will be stored inside the consumer group in what is called a Pending Entries List (PEL), that is a list of message IDs delivered but not yet acknowledged. The client will have to acknowledge the message processing using [`XACK`](../xack) in order for the pending entry to be removed from the PEL. The PEL can be inspected using the [`XPENDING`](../xpending) command. The `NOACK` subcommand can be used to avoid adding the message to the PEL in cases where reliability is not a requirement and the occasional message loss is acceptable. This is equivalent to acknowledging the message when it is read. The ID to specify in the **STREAMS** option when using `XREADGROUP` can be one of the following two: * The special `>` ID, which means that the consumer wants to receive only messages that were *never delivered to any other consumer*. It just means, give me new messages. * Any other ID, that is, 0 or any other valid ID or incomplete ID (just the millisecond time part), will have the effect of returning entries that are pending for the consumer sending the command with IDs greater than the one provided. 
So basically if the ID is not `>`, then the command will just let the client access its pending entries: messages delivered to it, but not yet acknowledged. Note that in this case, both `BLOCK` and `NOACK` are ignored. Like [`XREAD`](../xread) the `XREADGROUP` command can be used in a blocking way. There are no differences in this regard. What happens when a message is delivered to a consumer? ------------------------------------------------------- Two things: 1. If the message was never delivered to anyone, that is, if we are talking about a new message, then a PEL (Pending Entries List) is created. 2. If instead the message was already delivered to this consumer, and it is just re-fetching the same message again, then the *last delivery counter* is updated to the current time, and the *number of deliveries* is incremented by one. You can access those message properties using the [`XPENDING`](../xpending) command. Usage example ------------- Normally you use the command like this in order to get new messages and process them. In pseudo-code: ``` WHILE true entries = XREADGROUP GROUP $GroupName $ConsumerName BLOCK 2000 COUNT 10 STREAMS mystream > if entries == nil puts "Timeout... try again" CONTINUE end FOREACH entries AS stream_entries FOREACH stream_entries as message process_message(message.id,message.fields) # ACK the message as processed XACK mystream $GroupName message.id END END END ``` In this way the example consumer code will fetch only new messages, process them, and acknowledge them via [`XACK`](../xack). However the example code above is not complete, because it does not handle recovering after a crash. What happens if we crash in the middle of processing messages is that our messages will remain in the pending entries list, so we can access our history by initially giving `XREADGROUP` an ID of 0 and performing the same loop. 
Once the reply to an ID of 0 is an empty set of messages, we know that we have processed and acknowledged all the pending messages: we can start to use `>` as ID, in order to get the new messages and rejoin the consumers that are processing new things. To see how the command actually replies, please check the [`XREAD`](../xread) command page. What happens when a pending message is deleted? ----------------------------------------------- Entries may be deleted from the stream due to trimming or explicit calls to [`XDEL`](../xdel) at any time. By design, Redis doesn't prevent the deletion of entries that are present in the stream's PELs. When this happens, the PELs retain the deleted entries' IDs, but the actual entry payload is no longer available. Therefore, when reading such PEL entries, Redis will return a null value in place of their respective data. Example: ``` > XADD mystream 1 myfield mydata "1-0" > XGROUP CREATE mystream mygroup 0 OK > XREADGROUP GROUP mygroup myconsumer STREAMS mystream > 1) 1) "mystream" 2) 1) 1) "1-0" 2) 1) "myfield" 2) "mydata" > XDEL mystream 1-0 (integer) 1 > XREADGROUP GROUP mygroup myconsumer STREAMS mystream 0 1) 1) "mystream" 2) 1) 1) "1-0" 2) (nil) ``` Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays), specifically: The command returns an array of results: each element of the returned array is a two-element array containing the key name and the entries reported for that key. The entries reported are full stream entries, having IDs and the list of all the fields and values. Field and values are guaranteed to be reported in the same order they were added by [`XADD`](../xadd). When **BLOCK** is used, on timeout a null reply is returned. Reading the [Redis Streams introduction](https://redis.io/topics/streams-intro) is highly suggested in order to understand more about the streams overall behavior and semantics. 
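The crash-recovery pattern described above (drain our own pending history at ID 0, then switch to `>` for new messages) can be sketched like this; `read_pending` is a hypothetical stand-in for an `XREADGROUP` call at a given ID:

```python
# Sketch of the recovery pattern: keep reading our own pending history
# (ID "0") until it comes back empty, then switch to ">" so we only
# receive new messages. `read_pending` stands in for XREADGROUP.
def drain_then_follow(read_pending):
    processed = []
    while True:
        batch = read_pending("0")     # our not-yet-acked history
        if not batch:
            break                     # history fully drained
        processed.extend(batch)       # re-process (and XACK) each entry
    next_id = ">"                     # from now on: only new messages
    return processed, next_id

# Simulated pending history left over from a crash:
pending = [["1-0", "1-1"], ["1-2"], []]
processed, next_id = drain_then_follow(lambda _id: pending.pop(0))
print(processed, next_id)  # ['1-0', '1-1', '1-2'] >
```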
redis FT.SUGGET FT.SUGGET ========= ``` FT.SUGGET ``` Syntax ``` FT.SUGGET key prefix [FUZZY] [WITHSCORES] [WITHPAYLOADS] [MAX max] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.0.0](https://redis.io/docs/stack/search) Time complexity: O(1) Get completion suggestions for a prefix [Examples](#examples) Required arguments ------------------ `key` is suggestion dictionary key. `prefix` is prefix to complete on. Optional arguments ------------------ `FUZZY` performs a fuzzy prefix search, including prefixes at Levenshtein distance of 1 from the prefix sent. `MAX num` limits the results to a maximum of `num` (default: 5). `WITHSCORES` also returns the score of each suggestion. This can be used to merge results from multiple instances. `WITHPAYLOADS` returns optional payloads saved along with the suggestions. If no payload is present for an entry, it returns a null reply. Return ------ FT.SUGGET returns an array reply, which is a list of the top suggestions matching the prefix, optionally with score after each entry. Examples -------- **Get completion suggestions for a prefix** ``` 127.0.0.1:6379> FT.SUGGET sug hell FUZZY MAX 3 WITHSCORES 1) "hell" 2) "2147483648" 3) "hello" 4) "0.70710676908493042" ``` See also -------- [`FT.SUGADD`](../ft.sugadd) | [`FT.SUGDEL`](../ft.sugdel) | [`FT.SUGLEN`](../ft.suglen) Related topics -------------- [RediSearch](https://redis.io/docs/stack/search) History ------- * Starting with Redis version 2.0.0: Deprecated `WITHPAYLOADS` argument redis SORT_RO SORT\_RO ======== ``` SORT_RO ``` Syntax ``` SORT_RO key [BY pattern] [LIMIT offset count] [GET pattern [GET pattern ...]] [ASC | DESC] [ALPHA] ``` Available since: 7.0.0 Time complexity: O(N+M\*log(M)) where N is the number of elements in the list or set to sort, and M the number of returned elements. When the elements are not sorted, complexity is O(N). 
ACL categories: `@read`, `@set`, `@sortedset`, `@list`, `@slow`, `@dangerous`, Read-only variant of the [`SORT`](../sort) command. It is exactly like the original [`SORT`](../sort) but refuses the `STORE` option and can safely be used in read-only replicas. Since the original [`SORT`](../sort) has a `STORE` option it is technically flagged as a writing command in the Redis command table. For this reason read-only replicas in a Redis Cluster will redirect it to the master instance even if the connection is in read-only mode (see the [`READONLY`](../readonly) command of Redis Cluster). The `SORT_RO` variant was introduced in order to allow [`SORT`](../sort) behavior in read-only replicas without breaking compatibility on command flags. See original [`SORT`](../sort) for more details. Examples -------- ``` SORT_RO mylist BY weight_*->fieldname GET object_*->fieldname ``` Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): a list of sorted elements. redis ACL ACL === ``` ACL USERS ``` Syntax ``` ACL USERS ``` Available since: 6.0.0 Time complexity: O(N). Where N is the number of configured users. ACL categories: `@admin`, `@slow`, `@dangerous`, The command shows a list of all the usernames of the currently configured users in the Redis ACL system. Return ------ An array of strings. Examples -------- ``` > ACL USERS 1) "anna" 2) "antirez" 3) "default" ``` redis SCARD SCARD ===== ``` SCARD ``` Syntax ``` SCARD key ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@read`, `@set`, `@fast`, Returns the set cardinality (number of elements) of the set stored at `key`. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the cardinality (number of elements) of the set, or `0` if `key` does not exist. 
Examples -------- ``` SADD myset "Hello" SADD myset "World" SCARD myset ``` redis JSON.ARRLEN JSON.ARRLEN =========== ``` JSON.ARRLEN ``` Syntax ``` JSON.ARRLEN key [path] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [JSON 1.0.0](https://redis.io/docs/stack/json) Time complexity: O(1) where path is evaluated to a single value, O(N) where path is evaluated to multiple values, where N is the size of the key Report the length of the JSON array at `path` in `key` [Examples](#examples) Required arguments ------------------ `key` is key to parse. Optional arguments ------------------ `path` is JSONPath to specify. Default is root `$`, if not provided. Returns null if the `key` or `path` do not exist. Return ------ `JSON.ARRLEN` returns an [array](https://redis.io/docs/reference/protocol-spec/#resp-arrays) of integer replies, an integer for each matching value, each is the array's length, or `nil`, if the matching value is not an array. For more information about replies, see [Redis serialization protocol specification](https://redis.io/docs/reference/protocol-spec). Examples -------- **Get lengths of JSON arrays in a document** Create a document for wireless earbuds. ``` 127.0.0.1:6379> JSON.SET item:2 $ '{"name":"Wireless earbuds","description":"Wireless Bluetooth in-ear headphones","connection":{"wireless":true,"type":"Bluetooth"},"price":64.99,"stock":17,"colors":["black","white"], "max_level":[80, 100, 120]}' OK ``` Find lengths of arrays in all objects of the document. ``` 127.0.0.1:6379> JSON.ARRLEN item:2 '$.[*]' 1) (nil) 2) (nil) 3) (nil) 4) (nil) 5) (nil) 6) (integer) 2 7) (integer) 3 ``` Return the length of the `max_level` array. ``` 127.0.0.1:6379> JSON.ARRLEN item:2 '$..max_level' 1) (integer) 3 ``` Get all the maximum level values. 
``` 127.0.0.1:6379> JSON.GET item:2 '$..max_level' "[[80,100,120]]" ``` See also -------- [`JSON.ARRINDEX`](../json.arrindex) | [`JSON.ARRINSERT`](../json.arrinsert) Related topics -------------- * [RedisJSON](https://redis.io/docs/stack/json) * [Index and search JSON documents](https://redis.io/docs/stack/search/indexing_json) redis CLUSTER CLUSTER ======= ``` CLUSTER FLUSHSLOTS ``` Syntax ``` CLUSTER FLUSHSLOTS ``` Available since: 3.0.0 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, Deletes all slots from a node. The `CLUSTER FLUSHSLOTS` deletes all information about slots from the connected node. It can only be called when the database is empty. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` redis XPENDING XPENDING ======== ``` XPENDING ``` Syntax ``` XPENDING key group [[IDLE min-idle-time] start end count [consumer]] ``` Available since: 5.0.0 Time complexity: O(N) with N being the number of elements returned, so asking for a small fixed number of entries per call is O(1). O(M), where M is the total number of entries scanned when used with the IDLE filter. When the command returns just the summary and the list of consumers is small, it runs in O(1) time; otherwise, an additional O(N) time for iterating every consumer. ACL categories: `@read`, `@stream`, `@slow`, Fetching data from a stream via a consumer group, and not acknowledging such data, has the effect of creating *pending entries*. This is well explained in the [`XREADGROUP`](../xreadgroup) command, and even better in our [introduction to Redis Streams](https://redis.io/topics/streams-intro). The [`XACK`](../xack) command will immediately remove the pending entry from the Pending Entries List (PEL) since once a message is successfully processed, there is no longer any need for the consumer group to track it and to remember the current owner of the message. 
The `XPENDING` command is the interface to inspect the list of pending messages, and is thus a very important command for observing and understanding what is happening with a stream's consumer groups: what clients are active, what messages are pending to be consumed, or whether there are idle messages. Moreover this command, together with [`XCLAIM`](../xclaim), is used to recover from consumers that fail for a long time and, as a result, leave certain messages unprocessed: a different consumer can claim the message and continue. This is better explained in the [streams intro](https://redis.io/topics/streams-intro) and in the [`XCLAIM`](../xclaim) command page, and is not covered here. Summary form of XPENDING ------------------------ When `XPENDING` is called with just a key name and a consumer group name, it just outputs a summary about the pending messages in a given consumer group. In the following example, we create a consumer group and immediately create a pending message by reading from the group with [`XREADGROUP`](../xreadgroup). ``` > XGROUP CREATE mystream group55 0-0 OK > XREADGROUP GROUP group55 consumer-123 COUNT 1 STREAMS mystream > 1) 1) "mystream" 2) 1) 1) 1) 1526984818136-0 2) 1) "duration" 2) "1532" 3) "event-id" 4) "5" 5) "user-id" 6) "7782813" ``` We expect the pending entries list for the consumer group `group55` to have a message right now: the consumer named `consumer-123` fetched the message without acknowledging its processing. The simple `XPENDING` form will give us this information: ``` > XPENDING mystream group55 1) (integer) 1 2) 1526984818136-0 3) 1526984818136-0 4) 1) 1) "consumer-123" 2) "1" ``` In this form, the command outputs the total number of pending messages for this consumer group, which is one, followed by the smallest and greatest ID among the pending messages, and then lists every consumer in the consumer group with at least one pending message, and the number of pending messages it has. 
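The summary reply above can be modeled in a few lines of code. The sketch below is illustrative only, with a plain Python dict standing in for the PEL rather than Redis internals: it derives the count, smallest and greatest IDs, and per-consumer tallies the same way the summary form reports them.

```python
# Illustrative model of the XPENDING summary reply: `pel` is a plain
# dict mapping stream ID -> owning consumer (NOT how Redis stores it).

def id_key(stream_id):
    # Stream IDs order by (milliseconds, sequence), not lexicographically.
    ms, seq = stream_id.split("-")
    return (int(ms), int(seq))

def xpending_summary(pel):
    if not pel:
        return {"count": 0, "min": None, "max": None, "consumers": {}}
    ids = sorted(pel, key=id_key)
    consumers = {}
    for owner in pel.values():
        consumers[owner] = consumers.get(owner, 0) + 1
    return {"count": len(pel), "min": ids[0], "max": ids[-1],
            "consumers": consumers}

# Mirrors the session above: one pending entry owned by consumer-123.
summary = xpending_summary({"1526984818136-0": "consumer-123"})
print(summary)
```

Note the `id_key` helper: sorting IDs as strings would mis-order entries such as `2-0` and `10-0`, so the numeric pair is compared instead.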
Extended form of XPENDING ------------------------- The summary provides a good overview, but sometimes we are interested in the details. In order to see all the pending messages with more associated information we need to also pass a range of IDs, in a similar way to [`XRANGE`](../xrange), and a mandatory *count* argument, to limit the number of messages returned per call: ``` > XPENDING mystream group55 - + 10 1) 1) 1526984818136-0 2) "consumer-123" 3) (integer) 196415 4) (integer) 1 ``` In the extended form we no longer see the summary information; instead there is detailed information for each message in the pending entries list. For each message four attributes are returned: 1. The ID of the message. 2. The name of the consumer that fetched the message and has still to acknowledge it. We call it the current *owner* of the message. 3. The number of milliseconds that elapsed since the last time this message was delivered to this consumer. 4. The number of times this message was delivered. The deliveries counter, which is the fourth element in the array, is incremented when some other consumer *claims* the message with [`XCLAIM`](../xclaim), or when the message is delivered again via [`XREADGROUP`](../xreadgroup) when accessing the history of a consumer in a consumer group (see the [`XREADGROUP`](../xreadgroup) page for more info). It is possible to pass an additional argument to the command, in order to see the messages having a specific owner: ``` > XPENDING mystream group55 - + 10 consumer-123 ``` In the above case the output would be the same, since we have pending messages for only a single consumer. However, it is important to keep in mind that this operation, filtering by a specific consumer, remains efficient even when there are many pending messages from many consumers: a pending entries list data structure is kept both globally and for every consumer, so messages pending for a single consumer can be shown very efficiently. 
Idle time filter ---------------- It is also possible to filter pending stream entries by their idle time, given in milliseconds (useful for [`XCLAIM`](../xclaim)ing entries that have not been processed for some time): ``` > XPENDING mystream group55 IDLE 9000 - + 10 > XPENDING mystream group55 IDLE 9000 - + 10 consumer-123 ``` The first case will return the first 10 (or fewer) PEL entries of the entire group that are idle for over 9 seconds, whereas in the second case only those of `consumer-123`. Exclusive ranges and iterating the PEL -------------------------------------- The `XPENDING` command allows iterating over the pending entries just like [`XRANGE`](../xrange) and [`XREVRANGE`](../xrevrange) allow for the stream's entries. You can do this by prefixing the ID of the last-read pending entry with the `(` character that denotes an open (exclusive) range, and providing it to the subsequent call to the command. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays), specifically: The command returns data in different formats depending on the way it is called, as previously explained in this page. However the reply is always an array of items. History ------- * Starting with Redis version 6.2.0: Added the `IDLE` option and exclusive range intervals.
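The exclusive-range iteration described above can be sketched client-side. This is a hypothetical illustration in plain Python: the `fetch` function stands in for one `XPENDING` call over an already-sorted PEL, and the only Redis-specific detail modeled is the `(` prefix that makes the start of the range exclusive (available since 6.2).

```python
# Client-side PEL pagination sketch. `pel_ids` plays the role of the
# group's pending entries, assumed sorted by (ms, seq); `fetch` plays
# the role of one XPENDING call with a start, an implicit "+" end, and
# a count.

def _key(sid):
    ms, seq = sid.split("-")
    return (int(ms), int(seq))

def fetch(pel_ids, start, count):
    if start == "-":
        pred = lambda sid: True
    elif start.startswith("("):           # "(id" = open (exclusive) start
        lo = _key(start[1:])
        pred = lambda sid: _key(sid) > lo
    else:                                  # plain id = inclusive start
        lo = _key(start)
        pred = lambda sid: _key(sid) >= lo
    return [sid for sid in pel_ids if pred(sid)][:count]

def iterate_pel(pel_ids, page_size):
    """Walk the whole PEL page by page, resuming with '(' + last ID."""
    start, pages = "-", []
    while True:
        page = fetch(pel_ids, start, page_size)
        if not page:
            return pages
        pages.append(page)
        start = "(" + page[-1]   # exclusive: do not re-read the last entry

print(iterate_pel(["1-0", "1-1", "2-0", "10-0", "10-1"], 2))
```

Without the `(` prefix, each subsequent call would re-fetch the last entry of the previous page; the exclusive start is what makes the pagination loop terminate cleanly.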
redis DEL DEL === ``` DEL ``` Syntax ``` DEL key [key ...] ``` Available since: 1.0.0 Time complexity: O(N) where N is the number of keys that will be removed. When a key to remove holds a value other than a string, the individual complexity for this key is O(M) where M is the number of elements in the list, set, sorted set or hash. Removing a single key that holds a string value is O(1). ACL categories: `@keyspace`, `@write`, `@slow`, Removes the specified keys. A key is ignored if it does not exist. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): The number of keys that were removed. Examples -------- ``` SET key1 "Hello" SET key2 "World" DEL key1 key2 key3 ``` redis CLUSTER CLUSTER ======= ``` CLUSTER REPLICATE ``` Syntax ``` CLUSTER REPLICATE node-id ``` Available since: 3.0.0 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, The command reconfigures a node as a replica of the specified master. If the node receiving the command is an *empty master*, as a side effect of the command, the node role is changed from master to replica. Once a node is turned into the replica of another master node, there is no need to inform the other cluster nodes about the change: heartbeat packets exchanged between nodes will propagate the new configuration automatically. A replica will always accept the command, assuming that: 1. The specified node ID exists in its nodes table. 2. The specified node ID does not identify the instance we are sending the command to. 3. The specified node ID is a master. If the node receiving the command is not already a replica, but is a master, the command will succeed, and the node will be converted into a replica, only if the following additional conditions are met: 1. The node is not serving any hash slots. 2. The node is empty, no keys are stored at all in the key space. 
If the command succeeds the new replica will immediately try to contact its master in order to replicate from it. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` if the command was executed successfully, otherwise an error is returned. redis PEXPIRE PEXPIRE ======= ``` PEXPIRE ``` Syntax ``` PEXPIRE key milliseconds [NX | XX | GT | LT] ``` Available since: 2.6.0 Time complexity: O(1) ACL categories: `@keyspace`, `@write`, `@fast`, This command works exactly like [`EXPIRE`](../expire) but the time to live of the key is specified in milliseconds instead of seconds. Options ------- The `PEXPIRE` command supports a set of options since Redis 7.0: * `NX` -- Set expiry only when the key has no expiry * `XX` -- Set expiry only when the key has an existing expiry * `GT` -- Set expiry only when the new expiry is greater than current one * `LT` -- Set expiry only when the new expiry is less than current one A non-volatile key is treated as an infinite TTL for the purpose of `GT` and `LT`. The `GT`, `LT` and `NX` options are mutually exclusive. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), specifically: * `1` if the timeout was set. * `0` if the timeout was not set. e.g. key doesn't exist, or operation skipped due to the provided arguments. Examples -------- ``` SET mykey "Hello" PEXPIRE mykey 1500 TTL mykey PTTL mykey PEXPIRE mykey 1000 XX TTL mykey PEXPIRE mykey 1000 NX TTL mykey ``` History ------- * Starting with Redis version 7.0.0: Added options: `NX`, `XX`, `GT` and `LT`. redis GEODIST GEODIST ======= ``` GEODIST ``` Syntax ``` GEODIST key member1 member2 [M | KM | FT | MI] ``` Available since: 3.2.0 Time complexity: O(log(N)) ACL categories: `@read`, `@geo`, `@slow`, Return the distance between two members in the geospatial index represented by the sorted set. 
Given a sorted set representing a geospatial index, populated using the [`GEOADD`](../geoadd) command, the command returns the distance between the two specified members in the specified unit. If one or both the members are missing, the command returns NULL. The unit must be one of the following, and defaults to meters: * **m** for meters. * **km** for kilometers. * **mi** for miles. * **ft** for feet. The distance is computed assuming that the Earth is a perfect sphere, so errors up to 0.5% are possible in edge cases. Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings), specifically: The command returns the distance as a double (represented as a string) in the specified unit, or NULL if one or both the elements are missing. Examples -------- ``` GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania" GEODIST Sicily Palermo Catania GEODIST Sicily Palermo Catania km GEODIST Sicily Palermo Catania mi GEODIST Sicily Foo Bar ``` redis FT.SPELLCHECK FT.SPELLCHECK ============= ``` FT.SPELLCHECK ``` Syntax ``` FT.SPELLCHECK index query [DISTANCE distance] [TERMS INCLUDE | EXCLUDE dictionary [terms [terms ...]]] [DIALECT dialect] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.4.0](https://redis.io/docs/stack/search) Time complexity: O(1) Perform spelling correction on a query, returning suggestions for misspelled terms [Examples](#examples) Required arguments ------------------ `index` is the index with the indexed terms. `query` is the search query. See [Spellchecking](https://redis.io/docs/stack/search/reference/spellcheck) for more details. Optional arguments ------------------ `TERMS` specifies an inclusion (`INCLUDE`) or exclusion (`EXCLUDE`) of a custom dictionary named `{dict}`. Refer to [`FT.DICTADD`](../ft.dictadd), [`FT.DICTDEL`](../ft.dictdel) and [`FT.DICTDUMP`](../ft.dictdump) about managing custom dictionaries. 
`DISTANCE` is the maximum Levenshtein distance for spelling suggestions (default: 1, max: 4). `DIALECT {dialect_version}` selects the dialect version under which to execute the query. If not specified, the query will execute under the default dialect version set during module initial loading or via the [`FT.CONFIG SET`](../ft.config-set) command. Return ------ FT.SPELLCHECK returns an array reply, in which each element represents a misspelled term from the query. The misspelled terms are ordered by their order of appearance in the query. Each misspelled term, in turn, is a 3-element array consisting of the constant string `TERM`, the term itself and an array of suggestions for spelling corrections. Each element in the spelling corrections array consists of the score of the suggestion and the suggestion itself. The suggestions array, per misspelled term, is ordered in descending order by score. The score is calculated by dividing the number of documents in which the suggested term exists by the total number of documents in the index. Results can be normalized by dividing scores by the highest score. Examples -------- **Perform spelling correction on a query** ``` 127.0.0.1:6379> FT.SPELLCHECK idx held DISTANCE 2 1) 1) "TERM" 2) "held" 3) 1) 1) "0.66666666666666663" 2) "hello" 2) 1) "0.33333333333333331" 2) "help" ``` See also -------- [`FT.CONFIG SET`](../ft.config-set) | [`FT.DICTADD`](../ft.dictadd) | [`FT.DICTDEL`](../ft.dictdel) | [`FT.DICTDUMP`](../ft.dictdump) Related topics -------------- * [Spellchecking](https://redis.io/docs/stack/search/reference/spellcheck) * [RediSearch](https://redis.io/docs/stack/search) redis BF.INSERT BF.INSERT ========= ``` BF.INSERT ``` Syntax ``` BF.INSERT key [CAPACITY capacity] [ERROR error] [EXPANSION expansion] [NOCREATE] [NONSCALING] ITEMS item [item ...] 
``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 1.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(k \* n), where k is the number of hash functions and n is the number of items BF.INSERT is a sugarcoated combination of BF.RESERVE and BF.ADD. It creates a new filter if the `key` does not exist using the relevant arguments (see BF.RESERVE). Next, all `ITEMS` are inserted. ### Parameters * **key**: The name of the filter * **item**: One or more items to add. The `ITEMS` keyword must precede the list of items to add. Optional parameters: * **NOCREATE**: (Optional) Indicates that the filter should not be created if it does not already exist. If the filter does not yet exist, an error is returned rather than creating it automatically. This may be used where a strict separation between filter creation and filter addition is desired. It is an error to specify `NOCREATE` together with either `CAPACITY` or `ERROR`. * **capacity**: (Optional) Specifies the desired `capacity` for the filter to be created. This parameter is ignored if the filter already exists. If the filter is automatically created and this parameter is absent, then the module-level `capacity` is used. See [`BF.RESERVE`](../bf.reserve) for more information about the impact of this value. * **error**: (Optional) Specifies the `error` ratio of the newly created filter if it does not yet exist. If the filter is automatically created and `error` is not specified then the module-level error rate is used. See [`BF.RESERVE`](../bf.reserve) for more information about the format of this value. * **NONSCALING**: Prevents the filter from creating additional sub-filters if initial capacity is reached. Non-scaling filters require slightly less memory than their scaling counterparts. The filter returns an error when `capacity` is reached. * **expansion**: When `capacity` is reached, an additional sub-filter is created. 
The size of the new sub-filter is the size of the last sub-filter multiplied by `expansion`. If the number of elements to be stored in the filter is unknown, we recommend that you use an `expansion` of 2 or more to reduce the number of sub-filters. Otherwise, we recommend that you use an `expansion` of 1 to reduce memory consumption. The default expansion value is 2. Return ------ An array of booleans (integers). Each element is either true or false depending on whether the corresponding input element was newly added to the filter or may have previously existed. Examples -------- Add three items to a filter with default parameters if the filter does not already exist: ``` BF.INSERT filter ITEMS foo bar baz ``` Add one item to a filter with a capacity of 10000 if the filter does not already exist: ``` BF.INSERT filter CAPACITY 10000 ITEMS hello ``` Add two items to a filter, returning an error if the filter does not already exist: ``` BF.INSERT filter NOCREATE ITEMS foo bar ``` redis HINCRBYFLOAT HINCRBYFLOAT ============ ``` HINCRBYFLOAT ``` Syntax ``` HINCRBYFLOAT key field increment ``` Available since: 2.6.0 Time complexity: O(1) ACL categories: `@write`, `@hash`, `@fast`, Increment the specified `field` of a hash stored at `key`, and representing a floating point number, by the specified `increment`. If the increment value is negative, the result is to have the hash field value **decremented** instead of incremented. If the field does not exist, it is set to `0` before performing the operation. An error is returned if one of the following conditions occurs: * The field contains a value of the wrong type (not a string). * The current field content or the specified increment are not parsable as a double precision floating point number. The exact behavior of this command is identical to the one of the [`INCRBYFLOAT`](../incrbyfloat) command, please refer to the documentation of [`INCRBYFLOAT`](../incrbyfloat) for further information. 
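HINCRBYFLOAT's core semantics (a missing field counts as 0; a negative increment decrements) can be mimicked with a plain dict. This is an illustrative sketch only, not Redis's implementation: Redis parses string values with long-double arithmetic, and the wrong-type and unparsable-value errors mentioned above are not modeled here.

```python
# Toy model of HINCRBYFLOAT over a dict-of-dicts "database".
# Missing fields default to 0; negative increments decrement.

def hincrbyfloat(hashes, key, field, increment):
    h = hashes.setdefault(key, {})
    current = float(h.get(field, 0))  # error cases are not modeled
    h[field] = current + float(increment)
    return h[field]

db = {}
hincrbyfloat(db, "mykey", "field", 10.5)
hincrbyfloat(db, "mykey", "field", 0.1)   # increment
hincrbyfloat(db, "mykey", "field", -5)    # negative increment decrements
```

After these three calls the field holds approximately 5.6, matching the first half of the command's example session.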
Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): the value of `field` after the increment. Examples -------- ``` HSET mykey field 10.50 HINCRBYFLOAT mykey field 0.1 HINCRBYFLOAT mykey field -5 HSET mykey field 5.0e3 HINCRBYFLOAT mykey field 2.0e2 ``` Implementation details ---------------------- The command is always propagated in the replication link and the Append Only File as a [`HSET`](../hset) operation, so that differences in the underlying floating point math implementation will not be sources of inconsistency. redis CF.RESERVE CF.RESERVE ========== ``` CF.RESERVE ``` Syntax ``` CF.RESERVE key capacity [BUCKETSIZE bucketsize] [MAXITERATIONS maxiterations] [EXPANSION expansion] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 1.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(1) Create a Cuckoo Filter as `key` with a single sub-filter for the initial amount of `capacity` for items. Because of how Cuckoo Filters work, the filter is likely to declare itself full before `capacity` is reached; therefore, the fill rate will likely never reach 100%. The fill rate can be improved by using a larger `bucketsize` at the cost of a higher error rate. When the filter declares itself `full`, it will auto-expand by generating additional sub-filters at the cost of reduced performance and increased error rate. The new sub-filter is created with the size of the previous sub-filter multiplied by `expansion`. Like bucket size, additional sub-filters grow the error rate linearly. The minimal false positive error rate is 2/255 ≈ 0.78% when a bucket size of 1 is used. Larger buckets increase the error rate linearly (for example, a bucket size of 3 yields a 2.35% error rate) but improve the fill rate of the filter. `maxiterations` dictates the number of attempts to find a slot for the incoming fingerprint. 
Once the filter gets full, a high `maxIterations` value will slow down insertions. Unused capacity in prior sub-filters is automatically used when possible. The filter can grow up to 32 times. Parameters: ----------- * **key**: The key under which the filter is found. * **capacity**: Estimated capacity for the filter. Capacity is rounded to the next `2^n` number. The filter will likely not fill up to 100% of its capacity. Make sure to reserve extra capacity if you want to avoid expansions. Optional parameters: * **bucketsize**: Number of items in each bucket. A higher bucket size value improves the fill rate but also causes a higher error rate and slightly slower performance. The default value is 2. * **maxiterations**: Number of attempts to swap items between buckets before declaring the filter as full and creating an additional filter. A low value is better for performance and a higher number is better for filter fill rate. The default value is 20. * **expansion**: When a new filter is created, its size is the size of the current filter multiplied by `expansion`. Expansion is rounded to the next `2^n` number. The default value is 1. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) - `OK` if executed correctly, or [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors) otherwise. Examples -------- ``` redis> CF.RESERVE cf 1000 OK ``` ``` redis> CF.RESERVE cf 1000 (error) ERR item exists ``` ``` redis> CF.RESERVE cf_params 1000 BUCKETSIZE 8 MAXITERATIONS 20 EXPANSION 2 OK ``` redis GEOHASH GEOHASH ======= ``` GEOHASH ``` Syntax ``` GEOHASH key [member [member ...]] ``` Available since: 3.2.0 Time complexity: O(log(N)) for each member requested, where N is the number of elements in the sorted set. 
ACL categories: `@read`, `@geo`, `@slow`, Return valid [Geohash](https://en.wikipedia.org/wiki/Geohash) strings representing the position of one or more elements in a sorted set value representing a geospatial index (where elements were added using [`GEOADD`](../geoadd)). Normally Redis represents positions of elements using a variation of the Geohash technique where positions are encoded using 52 bit integers. The encoding is also different compared to the standard because the initial min and max coordinates used during the encoding and decoding process are different. This command however **returns a standard Geohash** in the form of a string as described in the [Wikipedia article](https://en.wikipedia.org/wiki/Geohash) and compatible with the [geohash.org](http://geohash.org) web site. Geohash string properties ------------------------- The command returns 11-character Geohash strings, so no precision is lost compared to the Redis internal 52 bit representation. The returned Geohashes have the following properties: 1. They can be shortened by removing characters from the right. This loses precision but still points to the same area. 2. It is possible to use them in `geohash.org` URLs such as `http://geohash.org/<geohash-string>`. This is an [example of such URL](http://geohash.org/sqdtr74hyu0). 3. Strings with a similar prefix are nearby, but the contrary is not true: it is possible that strings with different prefixes are nearby too. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays), specifically: The command returns an array where each element is the Geohash corresponding to each member name passed as argument to the command. 
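Property 1 above (truncation) follows directly from how standard Geohash encoding works: each output character encodes five more interleaved longitude/latitude bits, so an encoder asked for fewer characters simply stops earlier and produces a prefix of the longer hash. The sketch below is a minimal standard Geohash encoder written for illustration; it is not Redis's implementation, which derives the string from its internal 52-bit encoding.

```python
# Minimal standard Geohash encoder (illustrative; not Redis's internal code).
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat, lon, length=11):
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    out, ch, bit, even = [], 0, 0, True  # even-numbered bits refine longitude
    while len(out) < length:
        rng, val = (lon_rng, lon) if even else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2
        ch <<= 1
        if val >= mid:
            ch |= 1
            rng[0] = mid
        else:
            rng[1] = mid
        even = not even
        bit += 1
        if bit == 5:             # five bits per base32 character
            out.append(BASE32[ch])
            ch, bit = 0, 0
    return "".join(out)

# Truncating is the same as asking for fewer characters up front
# (coordinates are Palermo's, from the GEOADD example below):
full = geohash(38.115556, 13.361389)
print(full, "starts with", geohash(38.115556, 13.361389, 5))
```

Because each character is emitted independently once its five bits are fixed, `geohash(lat, lon, 5)` is always the first five characters of `geohash(lat, lon, 11)`, which is exactly why shortened hashes still point to the same (larger) area.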
Examples -------- ``` GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania" GEOHASH Sicily Palermo Catania ``` redis BF.RESERVE BF.RESERVE ========== ``` BF.RESERVE ``` Syntax ``` BF.RESERVE key error_rate capacity [EXPANSION expansion] [NONSCALING] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 1.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(1) Creates an empty Bloom Filter with a single sub-filter for the initial capacity requested and with an upper bound `error_rate`. By default, the filter auto-scales by creating additional sub-filters when `capacity` is reached. The new sub-filter is created with the size of the previous sub-filter multiplied by `expansion`. Though the filter can scale up by creating sub-filters, it is recommended to reserve the estimated required `capacity`, since maintaining and querying sub-filters requires additional memory (each sub-filter uses extra bits and hash functions) and consumes more CPU time than an equivalent filter that had the right capacity at creation time. The optimal number of hash functions is `ceil(-ln(error)/ln(2))`, i.e. `ceil(log2(1/error))`. The number of bits per item is the number of hash functions divided by `ln(2)`, roughly 1.44 bits per hash function. * **1%** error rate requires 7 hash functions and 10.08 bits per item. * **0.1%** error rate requires 10 hash functions and 14.4 bits per item. * **0.01%** error rate requires 14 hash functions and 20.16 bits per item. ### Parameters: * **key**: The key under which the filter is found * **error_rate**: The desired probability for false positives. The rate is a decimal value between 0 and 1. For example, for a desired false positive rate of 0.1% (1 in 1000), error_rate should be set to 0.001. * **capacity**: The number of entries intended to be added to the filter. If your filter allows scaling, performance will begin to degrade after adding more items than this number. The actual degradation depends on how far the limit has been exceeded. Performance degrades linearly with the number of `sub-filters`. 
Optional parameters: * **NONSCALING**: Prevents the filter from creating additional sub-filters if initial capacity is reached. Non-scaling filters require slightly less memory than their scaling counterparts. The filter returns an error when `capacity` is reached. * **EXPANSION**: When `capacity` is reached, an additional sub-filter is created. The size of the new sub-filter is the size of the last sub-filter multiplied by `expansion`. If the number of elements to be stored in the filter is unknown, we recommend that you use an `expansion` of 2 or more to reduce the number of sub-filters. Otherwise, we recommend that you use an `expansion` of 1 to reduce memory consumption. The default expansion value is 2. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) - `OK` if executed correctly, or [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors) otherwise. Examples -------- ``` redis> BF.RESERVE bf 0.01 1000 OK ``` ``` redis> BF.RESERVE bf 0.01 1000 (error) ERR item exists ``` ``` redis> BF.RESERVE bf_exp 0.01 1000 EXPANSION 2 OK ``` ``` redis> BF.RESERVE bf_non 0.01 1000 NONSCALING OK ```
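The per-error-rate figures quoted for BF.RESERVE can be reproduced with a few lines of arithmetic. This sketch assumes the standard Bloom filter formulas (hash functions rounded up to an integer, then about 1.44 bits per hash function per item); the computed bits-per-item differ from the quoted numbers only by rounding of the 1/ln(2) constant.

```python
import math

# Back-of-envelope Bloom filter sizing: k = ceil(log2(1/error_rate))
# hash functions, and each hash function costs 1/ln(2) ~ 1.44 bits/item.

def bloom_params(error_rate):
    k = math.ceil(math.log2(1 / error_rate))  # optimal hash function count
    bits_per_item = k / math.log(2)           # ~1.44 * k
    return k, bits_per_item

for err in (0.01, 0.001, 0.0001):
    k, bpi = bloom_params(err)
    print(f"error={err}: {k} hash functions, {bpi:.2f} bits/item")
```

Running this gives 7, 10, and 14 hash functions for 1%, 0.1%, and 0.01% error rates, matching the bullets above; total filter size is then roughly `capacity * bits_per_item` bits for a non-scaling filter.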
redis SUNSUBSCRIBE SUNSUBSCRIBE ============ ``` SUNSUBSCRIBE ``` Syntax ``` SUNSUBSCRIBE [shardchannel [shardchannel ...]] ``` Available since: 7.0.0 Time complexity: O(N) where N is the number of clients already subscribed to a shard channel. ACL categories: `@pubsub`, `@slow`, Unsubscribes the client from the given shard channels, or from all of them if none is given. When no shard channels are specified, the client is unsubscribed from all the previously subscribed shard channels. In this case a message for every unsubscribed shard channel will be sent to the client. Note: Global channels and shard channels need to be unsubscribed from separately. For more information about sharded Pub/Sub, see [Sharded Pub/Sub](https://redis.io/topics/pubsub#sharded-pubsub). Return ------ When successful, this command doesn't return anything. Instead, for each shard channel, one message with the first element being the string "sunsubscribe" is pushed as a confirmation that the command succeeded. redis CF.COUNT CF.COUNT ======== ``` CF.COUNT ``` Syntax ``` CF.COUNT key item ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 1.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(k), where k is the number of sub-filters Returns the number of times an item may be in the filter. Because this is a probabilistic data structure, this may not necessarily be accurate. If you just want to know if an item exists in the filter, use [`CF.EXISTS`](../cf.exists) because it is more efficient for that purpose. ### Parameters * **key**: The name of the filter * **item**: The item to count Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) - with the count of possible matching copies of the item in the filter. 
Examples -------- ``` redis> CF.COUNT cf item1 (integer) 42 ``` ``` redis> CF.COUNT cf item_new (integer) 0 ``` redis ZRANK ZRANK ===== ``` ZRANK ``` Syntax ``` ZRANK key member [WITHSCORE] ``` Available since: 2.0.0 Time complexity: O(log(N)) ACL categories: `@read`, `@sortedset`, `@fast`, Returns the rank of `member` in the sorted set stored at `key`, with the scores ordered from low to high. The rank (or index) is 0-based, which means that the member with the lowest score has rank `0`. The optional `WITHSCORE` argument supplements the command's reply with the score of the element returned. Use [`ZREVRANK`](../zrevrank) to get the rank of an element with the scores ordered from high to low. Return ------ * If `member` exists in the sorted set: + using `WITHSCORE`, [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): an array containing the rank and score of `member`. + without using `WITHSCORE`, [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the rank of `member`. * If `member` does not exist in the sorted set or `key` does not exist: + using `WITHSCORE`, [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): `nil`. + without using `WITHSCORE`, [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): `nil`. Note that in RESP3 null and nullarray are the same, but in RESP2 they are not. Examples -------- ``` ZADD myzset 1 "one" ZADD myzset 2 "two" ZADD myzset 3 "three" ZRANK myzset "three" ZRANK myzset "four" ZRANK myzset "three" WITHSCORE ZRANK myzset "four" WITHSCORE ``` History ------- * Starting with Redis version 7.2.0: Added the optional `WITHSCORE` argument. redis TDIGEST.BYREVRANK TDIGEST.BYREVRANK ================= ``` TDIGEST.BYREVRANK ``` Syntax ``` TDIGEST.BYREVRANK key reverse_rank [reverse_rank ...] 
``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.4.0](https://redis.io/docs/stack/bloom) Time complexity: O(1) Returns, for each input reverse rank, an estimation of the value (floating-point) with that reverse rank. Multiple estimations can be retrieved in a single call. Required arguments ------------------ `key` is the key name for an existing t-digest sketch. `revrank` is the reverse rank for which the value should be retrieved. 0 is the reverse rank of the value of the largest observation. *n*-1 is the reverse rank of the value of the smallest observation; *n* denotes the number of observations added to the sketch. Return value ------------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) - an array of floating-points populated with value_1, value_2, ..., value_R: * Return an accurate result when `revrank` is 0 (the value of the largest observation) * Return an accurate result when `revrank` is *n*-1 (the value of the smallest observation), where *n* denotes the number of observations added to the sketch. * Return '-inf' when `revrank` is equal to *n* or larger than *n* All values are 'nan' if the sketch is empty. Examples -------- ``` redis> TDIGEST.CREATE t COMPRESSION 1000 OK redis> TDIGEST.ADD t 1 2 2 3 3 3 4 4 4 4 5 5 5 5 5 OK redis> TDIGEST.BYREVRANK t 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 1) "5" 2) "5" 3) "5" 4) "5" 5) "5" 6) "4" 7) "4" 8) "4" 9) "4" 10) "3" 11) "3" 12) "3" 13) "2" 14) "2" 15) "1" 16) "-inf" ``` redis PTTL PTTL ==== ``` PTTL ``` Syntax ``` PTTL key ``` Available since: 2.6.0 Time complexity: O(1) ACL categories: `@keyspace`, `@read`, `@fast`, Like [`TTL`](../ttl) this command returns the remaining time to live of a key that has an expire set, with the sole difference that [`TTL`](../ttl) returns the amount of remaining time in seconds while `PTTL` returns it in milliseconds. In Redis 2.6 or older the command returns `-1` if the key does not exist or if the key exists but has no associated expire. 
Starting with Redis 2.8 the return value in case of error changed: * The command returns `-2` if the key does not exist. * The command returns `-1` if the key exists but has no associated expire. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): TTL in milliseconds, or a negative value in order to signal an error (see the description above). Examples -------- ``` SET mykey "Hello" EXPIRE mykey 1 PTTL mykey ``` History ------- * Starting with Redis version 2.8.0: Added the -2 reply. redis TS.CREATE TS.CREATE ========= ``` TS.CREATE ``` Syntax ``` TS.CREATE key [RETENTION retentionPeriod] [ENCODING [UNCOMPRESSED|COMPRESSED]] [CHUNK_SIZE size] [DUPLICATE_POLICY policy] [LABELS {label value}...] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [TimeSeries 1.0.0](https://redis.io/docs/stack/timeseries) Time complexity: O(1) Create a new time series [Examples](#examples) Required arguments ------------------ `key` is the key name for the time series. **Notes:** * If a key already exists, you get a Redis error reply, `TSDB: key already exists`. You can check for the existence of a key with the [`EXISTS`](../exists) command. * Other commands that also create a new time series when called with a key that does not exist are [`TS.ADD`](../ts.add), [`TS.INCRBY`](../ts.incrby), and [`TS.DECRBY`](../ts.decrby). Optional arguments ------------------ `RETENTION retentionPeriod` is the maximum age for samples compared to the highest reported timestamp, in milliseconds. Samples are expired based solely on the difference between their timestamp and the timestamps passed to subsequent [`TS.ADD`](../ts.add), [`TS.MADD`](../ts.madd), [`TS.INCRBY`](../ts.incrby), and [`TS.DECRBY`](../ts.decrby) calls with this key. When set to 0, samples never expire. 
When not specified, the option is set to the global [RETENTION\_POLICY](https://redis.io/docs/stack/timeseries/configuration/#retention_policy) configuration of the database, which by default is 0. `ENCODING enc` specifies the series samples encoding format as one of the following values: * `COMPRESSED`, applies compression to the series samples. * `UNCOMPRESSED`, keeps the raw samples in memory. Adding this flag keeps data in an uncompressed form. `COMPRESSED` is almost always the right choice. Compression not only saves memory but usually improves performance due to a lower number of memory accesses. It can result in about 90% memory reduction. The exception is highly irregular timestamps or values, which occur rarely. When not specified, the option is set to `COMPRESSED`. `CHUNK_SIZE size` is initial allocation size, in bytes, for the data part of each new chunk. Actual chunks may consume more memory. Changing chunkSize (using [`TS.ALTER`](../ts.alter)) does not affect existing chunks. Must be a multiple of 8 in the range [48 .. 1048576]. When not specified, it is set to the global [CHUNK\_SIZE\_BYTES](https://redis.io/docs/stack/timeseries/configuration/#chunk_size_bytes) configuration of the database, which by default is 4096 (a single memory page). Note: Before v1.6.10 no minimum was enforced. Between v1.6.10 and v1.6.17 and in v1.8.0 the minimum value was 128. Since v1.8.1 the minimum value is 48. The data in each key is stored in chunks. Each chunk contains a header and data for a given timeframe. An index contains all chunks. Iterations occur inside each chunk. Depending on your use case, consider these tradeoffs for having smaller or larger sizes of chunks: * Insert performance: Smaller chunks result in slower inserts (more chunks need to be created). * Query performance: Queries for a small subset when the chunks are very large are slower, as we need to iterate over the chunk to find the data. 
* Larger chunks may take more memory when you have a very large number of keys and very few samples per key, or less memory when you have many samples per key. If you are unsure about your use case, select the default. `DUPLICATE_POLICY policy` is policy for handling insertion ([`TS.ADD`](../ts.add) and [`TS.MADD`](../ts.madd)) of multiple samples with identical timestamps, with one of the following values: * `BLOCK`: ignore any newly reported value and reply with an error * `FIRST`: ignore any newly reported value * `LAST`: override with the newly reported value * `MIN`: only override if the value is lower than the existing value * `MAX`: only override if the value is higher than the existing value * `SUM`: If a previous sample exists, add the new sample to it so that the updated value is equal to (previous + new). If no previous sample exists, set the updated value equal to the new value. When not specified: set to the global [DUPLICATE\_POLICY](https://redis.io/docs/stack/timeseries/configuration/#duplicate_policy) configuration of the database (which, by default, is `BLOCK`). `LABELS {label value}...` is set of label-value pairs that represent metadata labels of the key and serve as a secondary index. The [`TS.MGET`](../ts.mget), [`TS.MRANGE`](../ts.mrange), and [`TS.MREVRANGE`](../ts.mrevrange) commands operate on multiple time series based on their labels. The [`TS.QUERYINDEX`](../ts.queryindex) command returns all time series keys matching a given filter based on their labels. Return value ------------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) - `OK` if executed correctly, or [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors) otherwise. 
Examples -------- **Create a temperature time series** ``` 127.0.0.1:6379> TS.CREATE temperature:2:32 RETENTION 60000 DUPLICATE_POLICY MAX LABELS sensor_id 2 area_id 32 OK ``` See also -------- [`TS.ADD`](../ts.add) | [`TS.INCRBY`](../ts.incrby) | [`TS.DECRBY`](../ts.decrby) | [`TS.MGET`](../ts.mget) | [`TS.MRANGE`](../ts.mrange) | [`TS.MREVRANGE`](../ts.mrevrange) | [`TS.QUERYINDEX`](../ts.queryindex) Related topics -------------- * [RedisTimeSeries](https://redis.io/docs/stack/timeseries) * [RedisTimeSeries Version 1.2 Is Here!](https://redis.com/blog/redistimeseries-version-1-2-is-here/) redis MEMORY MEMORY ====== ``` MEMORY PURGE ``` Syntax ``` MEMORY PURGE ``` Available since: 4.0.0 Time complexity: Depends on how much memory is allocated, could be slow ACL categories: `@slow`, The `MEMORY PURGE` command attempts to purge dirty pages so these can be reclaimed by the allocator. This command is currently implemented only when using **jemalloc** as an allocator, and evaluates to a benign NOOP for all others. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) redis CLIENT CLIENT ====== ``` CLIENT TRACKING ``` Syntax ``` CLIENT TRACKING <ON | OFF> [REDIRECT client-id] [PREFIX prefix [PREFIX prefix ...]] [BCAST] [OPTIN] [OPTOUT] [NOLOOP] ``` Available since: 6.0.0 Time complexity: O(1). Some options may introduce additional complexity. ACL categories: `@slow`, `@connection`, This command enables the tracking feature of the Redis server, that is used for [server assisted client side caching](https://redis.io/topics/client-side-caching). When tracking is enabled Redis remembers the keys that the connection requested, in order to send later invalidation messages when such keys are modified. Invalidation messages are sent in the same connection (only available when the RESP3 protocol is used) or redirected in a different connection (available also with RESP2 and Pub/Sub). 
A special *broadcasting* mode is available where clients participating in this protocol receive every notification simply by subscribing to given key prefixes, regardless of the keys that they requested. Given the complexity of the argument please refer to [the main client side caching documentation](https://redis.io/topics/client-side-caching) for the details. This manual page is only a reference for the options of this subcommand. In order to enable tracking, use: ``` CLIENT TRACKING on ... options ... ``` The feature will remain active in the current connection for its entire lifetime, unless tracking is turned off with `CLIENT TRACKING off` at some point. The following is the list of options that modify the behavior of the command when enabling tracking: * `REDIRECT <id>`: send invalidation messages to the connection with the specified ID. The connection must exist. You can get the ID of a connection using [`CLIENT ID`](../client-id). If the connection we are redirecting to is terminated, when in RESP3 mode the connection with tracking enabled will receive `tracking-redir-broken` push messages in order to signal the condition. * `BCAST`: enable tracking in broadcasting mode. In this mode invalidation messages are reported for all the prefixes specified, regardless of the keys requested by the connection. Instead when the broadcasting mode is not enabled, Redis will track which keys are fetched using read-only commands, and will report invalidation messages only for such keys. * `PREFIX <prefix>`: for broadcasting, register a given key prefix, so that notifications will be provided only for keys starting with this string. This option can be given multiple times to register multiple prefixes. If broadcasting is enabled without this option, Redis will send notifications for every key. You can't delete a single prefix, but you can delete all prefixes by disabling and re-enabling tracking. 
Using this option adds the additional time complexity of O(N^2), where N is the total number of prefixes tracked. * `OPTIN`: when broadcasting is NOT active, normally don't track keys in read only commands, unless they are called immediately after a `CLIENT CACHING yes` command. * `OPTOUT`: when broadcasting is NOT active, normally track keys in read only commands, unless they are called immediately after a `CLIENT CACHING no` command. * `NOLOOP`: don't send notifications about keys modified by this connection itself. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` if the connection was successfully put in tracking mode or if the tracking mode was successfully disabled. Otherwise an error is returned. redis MODULE MODULE ====== ``` MODULE LOAD ``` Syntax ``` MODULE LOAD path [arg [arg ...]] ``` Available since: 4.0.0 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, Loads a module from a dynamic library at runtime. This command loads and initializes the Redis module from the dynamic library specified by the `path` argument. The `path` should be the absolute path of the library, including the full filename. Any additional arguments are passed unmodified to the module. **Note**: modules can also be loaded at server startup with `loadmodule` configuration directive in `redis.conf`. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` if module was loaded. redis GETBIT GETBIT ====== ``` GETBIT ``` Syntax ``` GETBIT key offset ``` Available since: 2.2.0 Time complexity: O(1) ACL categories: `@read`, `@bitmap`, `@fast`, Returns the bit value at *offset* in the string value stored at *key*. When *offset* is beyond the string length, the string is assumed to be a contiguous space with 0 bits. 
When *key* does not exist it is assumed to be an empty string, so *offset* is always out of range and the value is also assumed to be a contiguous space with 0 bits. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the bit value stored at *offset*. Examples -------- ``` SETBIT mykey 7 1 GETBIT mykey 0 GETBIT mykey 7 GETBIT mykey 100 ``` redis ZINCRBY ZINCRBY ======= ``` ZINCRBY ``` Syntax ``` ZINCRBY key increment member ``` Available since: 1.2.0 Time complexity: O(log(N)) where N is the number of elements in the sorted set. ACL categories: `@write`, `@sortedset`, `@fast`, Increments the score of `member` in the sorted set stored at `key` by `increment`. If `member` does not exist in the sorted set, it is added with `increment` as its score (as if its previous score was `0.0`). If `key` does not exist, a new sorted set with the specified `member` as its sole member is created. An error is returned when `key` exists but does not hold a sorted set. The `score` value should be the string representation of a numeric value, and accepts double precision floating point numbers. It is possible to provide a negative value to decrement the score. Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): the new score of `member` (a double precision floating point number), represented as string. Examples -------- ``` ZADD myzset 1 "one" ZADD myzset 2 "two" ZINCRBY myzset 2 "one" ZRANGE myzset 0 -1 WITHSCORES ``` redis UNWATCH UNWATCH ======= ``` UNWATCH ``` Syntax ``` UNWATCH ``` Available since: 2.2.0 Time complexity: O(1) ACL categories: `@fast`, `@transaction`, Flushes all the previously watched keys for a [transaction](https://redis.io/topics/transactions). If you call [`EXEC`](../exec) or [`DISCARD`](../discard), there's no need to manually call `UNWATCH`. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): always `OK`. 
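For illustration, a minimal session sketch (the key name `mykey` is hypothetical) showing `UNWATCH` clearing a previously issued [`WATCH`](../watch) before a transaction:

```
WATCH mykey
UNWATCH
MULTI
SET mykey "Hello"
EXEC
```

Because `UNWATCH` cleared the watch, the `EXEC` above is no longer conditional on `mykey` remaining unmodified by other clients.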
redis PUBSUB PUBSUB ====== ``` PUBSUB CHANNELS ``` Syntax ``` PUBSUB CHANNELS [pattern] ``` Available since: 2.8.0 Time complexity: O(N) where N is the number of active channels, and assuming constant time pattern matching (relatively short channels and patterns) ACL categories: `@pubsub`, `@slow`, Lists the currently *active channels*. An active channel is a Pub/Sub channel with one or more subscribers (excluding clients subscribed to patterns). If no `pattern` is specified, all the channels are listed, otherwise if pattern is specified only channels matching the specified glob-style pattern are listed. Cluster note: in a Redis Cluster clients can subscribe to every node, and can also publish to every other node. The cluster will make sure that published messages are forwarded as needed. That said, [`PUBSUB`](../pubsub)'s replies in a cluster only report information from the node's Pub/Sub context, rather than the entire cluster. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): a list of active channels, optionally matching the specified pattern.
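For illustration, a sketch of a session, assuming one other client is subscribed to a hypothetical channel `news.tech`:

```
> PUBSUB CHANNELS
1) "news.tech"
> PUBSUB CHANNELS news.*
1) "news.tech"
> PUBSUB CHANNELS user.*
(empty array)
```

The second and third calls show glob-style pattern filtering: only active channels whose names match the pattern are returned.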
redis PFCOUNT PFCOUNT ======= ``` PFCOUNT ``` Syntax ``` PFCOUNT key [key ...] ``` Available since: 2.8.9 Time complexity: O(1) with a very small average constant time when called with a single key. O(N) with N being the number of keys, and much bigger constant times, when called with multiple keys. ACL categories: `@read`, `@hyperloglog`, `@slow`, When called with a single key, returns the approximated cardinality computed by the HyperLogLog data structure stored at the specified variable, which is 0 if the variable does not exist. When called with multiple keys, returns the approximated cardinality of the union of the HyperLogLogs passed, by internally merging the HyperLogLogs stored at the provided keys into a temporary HyperLogLog. The HyperLogLog data structure can be used in order to count **unique** elements in a set using just a small constant amount of memory, specifically 12k bytes for every HyperLogLog (plus a few bytes for the key itself). The returned cardinality of the observed set is not exact, but approximated with a standard error of 0.81%. For example in order to take the count of all the unique search queries performed in a day, a program needs to call [`PFADD`](../pfadd) every time a query is processed. The estimated number of unique queries can be retrieved with `PFCOUNT` at any time. Note: as a side effect of calling this function, it is possible that the HyperLogLog is modified, since the last 8 bytes encode the latest computed cardinality for caching purposes. So `PFCOUNT` is technically a write command. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), specifically: * The approximated number of unique elements observed via [`PFADD`](../pfadd). 
Examples -------- ``` PFADD hll foo bar zap PFADD hll zap zap zap PFADD hll foo bar PFCOUNT hll PFADD some-other-hll 1 2 3 PFCOUNT hll some-other-hll ``` Performance ----------- When `PFCOUNT` is called with a single key, performance is excellent even if, in theory, the constant times to process a dense HyperLogLog are high. This is possible because `PFCOUNT` uses caching in order to remember the previously computed cardinality, which rarely changes because most [`PFADD`](../pfadd) operations will not update any register. Hundreds of operations per second are possible. When `PFCOUNT` is called with multiple keys, an on-the-fly merge of the HyperLogLogs is performed, which is slow, moreover the cardinality of the union can't be cached, so when used with multiple keys `PFCOUNT` may take time on the order of a millisecond, and should not be abused. The user should keep in mind that single-key and multiple-keys executions of this command are semantically different and have different performance characteristics. HyperLogLog representation -------------------------- Redis HyperLogLogs are represented using a double representation: the *sparse* representation suitable for HLLs counting a small number of elements (resulting in a small number of registers set to non-zero value), and a *dense* representation suitable for higher cardinalities. Redis automatically switches from the sparse to the dense representation when needed. The sparse representation uses a run-length encoding optimized to efficiently store a large number of registers set to zero. The dense representation is a Redis string of 12288 bytes in order to store 16384 6-bit counters. The need for the double representation comes from the fact that using 12k (which is the dense representation memory requirement) to encode just a few registers for smaller cardinalities is extremely suboptimal. 
Both representations are prefixed with a 16-byte header that includes a magic number, an encoding/version field, and the cached cardinality estimation, stored in little endian format (the most significant bit is 1 if the estimation is invalid because the HyperLogLog was updated after the cardinality was computed). The HyperLogLog, being a Redis string, can be retrieved with [`GET`](../get) and restored with [`SET`](../set). Calling [`PFADD`](../pfadd), `PFCOUNT` or [`PFMERGE`](../pfmerge) commands with a corrupted HyperLogLog is never a problem: it may return random values but does not affect the stability of the server. Most of the time, when a sparse representation is corrupted, the server recognizes the corruption and returns an error. The representation is neutral from the point of view of the processor word size and endianness, so the same representation is used by 32-bit and 64-bit processors, big endian or little endian. More details about the Redis HyperLogLog implementation can be found in [this blog post](http://antirez.com/news/75). The source code of the implementation in the `hyperloglog.c` file is also easy to read and understand, and includes a full specification for the exact encoding used for the sparse and dense representations. redis XLEN XLEN ==== ``` XLEN ``` Syntax ``` XLEN key ``` Available since: 5.0.0 Time complexity: O(1) ACL categories: `@read`, `@stream`, `@fast`, Returns the number of entries inside a stream. If the specified key does not exist the command returns zero, as if the stream was empty. However, note that unlike other Redis types, zero-length streams are possible, so you should call [`TYPE`](../type) or [`EXISTS`](../exists) in order to check if a key exists or not. Streams are not auto-deleted once they have no entries inside (for instance after an [`XDEL`](../xdel) call), because the stream may have consumer groups associated with it. 
Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of entries of the stream at `key`. Examples -------- ``` XADD mystream * item 1 XADD mystream * item 2 XADD mystream * item 3 XLEN mystream ``` redis SLOWLOG SLOWLOG ======= ``` SLOWLOG RESET ``` Syntax ``` SLOWLOG RESET ``` Available since: 2.2.12 Time complexity: O(N) where N is the number of entries in the slowlog ACL categories: `@admin`, `@slow`, `@dangerous`, This command resets the slow log, clearing all entries in it. Once deleted, the information is lost forever. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` redis PUBSUB PUBSUB ====== ``` PUBSUB SHARDNUMSUB ``` Syntax ``` PUBSUB SHARDNUMSUB [shardchannel [shardchannel ...]] ``` Available since: 7.0.0 Time complexity: O(N) for the SHARDNUMSUB subcommand, where N is the number of requested shard channels ACL categories: `@pubsub`, `@slow`, Returns the number of subscribers for the specified shard channels. Note that it is valid to call this command without channels, in this case it will just return an empty list. Cluster note: in a Redis Cluster, [`PUBSUB`](../pubsub)'s replies in a cluster only report information from the node's Pub/Sub context, rather than the entire cluster. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): a list of channels and number of subscribers for every channel. The format is channel, count, channel, count, ..., so the list is flat. The order in which the channels are listed is the same as the order of the shard channels specified in the command call. Examples -------- ``` > PUBSUB SHARDNUMSUB orders 1) "orders" 2) (integer) 1 ``` redis TS.DECRBY TS.DECRBY ========= ``` TS.DECRBY ``` Syntax ``` TS.DECRBY key value [TIMESTAMP timestamp] [RETENTION retentionPeriod] [UNCOMPRESSED] [CHUNK_SIZE size] [LABELS {label value}...] 
``` Available in: [Redis Stack](https://redis.io/docs/stack) / [TimeSeries 1.0.0](https://redis.io/docs/stack/timeseries) Time complexity: O(M) where M is the number of compaction rules, or O(1) with no compaction Decrease the value of the sample with the maximum existing timestamp, or create a new sample with a value equal to the value of the sample with the maximum existing timestamp with a given decrement. Required arguments ------------------ `key` is key name for the time series. `value` is numeric data value of the sample (double). **Notes** * When specified key does not exist, a new time series is created. * You can use this command as a counter or gauge that automatically gets history as a time series. * Explicitly adding samples to a compacted time series (using [`TS.ADD`](../ts.add), [`TS.MADD`](../ts.madd), [`TS.INCRBY`](../ts.incrby), or `TS.DECRBY`) may result in inconsistencies between the raw and the compacted data. The compaction process may override such samples. Optional arguments ------------------ `TIMESTAMP timestamp` is (integer) UNIX sample timestamp in milliseconds or `*` to set the timestamp according to the server clock. `timestamp` must be equal to or higher than the maximum existing timestamp. When equal, the value of the sample with the maximum existing timestamp is decreased. If it is higher, a new sample with a timestamp set to `timestamp` is created, and its value is set to the value of the sample with the maximum existing timestamp minus `value`. If the time series is empty, the value is set to `value`. When not specified, the timestamp is set according to the server clock. `RETENTION retentionPeriod` is maximum retention period, compared to the maximum existing timestamp, in milliseconds. Use it only if you are creating a new time series. It is ignored if you are adding samples to an existing time series. See `RETENTION` in [`TS.CREATE`](../ts.create). `UNCOMPRESSED` changes data storage from compressed (default) to uncompressed. 
Use it only if you are creating a new time series. It is ignored if you are adding samples to an existing time series. See `ENCODING` in [`TS.CREATE`](../ts.create). `CHUNK_SIZE size` is memory size, in bytes, allocated for each data chunk. Use it only if you are creating a new time series. It is ignored if you are adding samples to an existing time series. See `CHUNK_SIZE` in [`TS.CREATE`](../ts.create). `LABELS [{label value}...]` is set of label-value pairs that represent metadata labels of the key and serve as a secondary index. Use it only if you are creating a new time series. It is ignored if you are adding samples to an existing time series. See `LABELS` in [`TS.CREATE`](../ts.create). **Notes** * You can use this command to add data to a nonexisting time series in a single command. This is why `RETENTION`, `UNCOMPRESSED`, `CHUNK_SIZE`, and `LABELS` are optional arguments. * When specified and the key doesn't exist, a new time series is created. Setting the `RETENTION` and `LABELS` introduces additional time complexity. Return value ------------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) - the timestamp of the upserted sample, or [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors). See also -------- [`TS.INCRBY`](../ts.incrby) | [`TS.CREATE`](../ts.create) Related topics -------------- [RedisTimeSeries](https://redis.io/docs/stack/timeseries) redis SISMEMBER SISMEMBER ========= ``` SISMEMBER ``` Syntax ``` SISMEMBER key member ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@read`, `@set`, `@fast`, Returns if `member` is a member of the set stored at `key`. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), specifically: * `1` if the element is a member of the set. * `0` if the element is not a member of the set, or if `key` does not exist. 
Examples -------- ``` SADD myset "one" SISMEMBER myset "one" SISMEMBER myset "two" ``` redis SMOVE SMOVE ===== ``` SMOVE ``` Syntax ``` SMOVE source destination member ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@write`, `@set`, `@fast`, Move `member` from the set at `source` to the set at `destination`. This operation is atomic. At any given moment the element will appear to be a member of `source` **or** `destination` for other clients. If the source set does not exist or does not contain the specified element, no operation is performed and `0` is returned. Otherwise, the element is removed from the source set and added to the destination set. When the specified element already exists in the destination set, it is only removed from the source set. An error is returned if `source` or `destination` does not hold a set value. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), specifically: * `1` if the element is moved. * `0` if the element is not a member of `source` and no operation was performed. Examples -------- ``` SADD myset "one" SADD myset "two" SADD myotherset "three" SMOVE myset myotherset "two" SMEMBERS myset SMEMBERS myotherset ``` redis JSON.NUMMULTBY JSON.NUMMULTBY ============== ``` JSON.NUMMULTBY (deprecated) ``` As of JSON version 2.0, this command is regarded as deprecated. Syntax ``` JSON.NUMMULTBY key path value ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [JSON 1.0.0](https://redis.io/docs/stack/json) Time complexity: O(1) when path is evaluated to a single value, O(N) when path is evaluated to multiple values, where N is the size of the key Multiply the number value stored at `path` by `value` [Examples](#examples) Required arguments ------------------ `key` is key to modify. `value` is number value to multiply. Optional arguments ------------------ `path` is JSONPath to specify. Default is root `$`. 
Return ------ JSON.NUMMULTBY returns a bulk string reply specified as a stringified new value for each path, or `nil` element if the matching JSON value is not a number. For more information about replies, see [Redis serialization protocol specification](https://redis.io/docs/reference/protocol-spec). Examples -------- ``` 127.0.0.1:6379> JSON.SET doc . '{"a":"b","b":[{"a":2}, {"a":5}, {"a":"c"}]}' OK 127.0.0.1:6379> JSON.NUMMULTBY doc $.a 2 "[null]" 127.0.0.1:6379> JSON.NUMMULTBY doc $..a 2 "[null,4,10,null]" ``` See also -------- [`JSON.NUMINCRBY`](../json.numincrby) | [`JSON.ARRINSERT`](../json.arrinsert) Related topics -------------- * [RedisJSON](https://redis.io/docs/stack/json) * [Index and search JSON documents](https://redis.io/docs/stack/search/indexing_json) redis EXEC EXEC ==== ``` EXEC ``` Syntax ``` EXEC ``` Available since: 1.2.0 Time complexity: Depends on commands in the transaction ACL categories: `@slow`, `@transaction`, Executes all previously queued commands in a [transaction](https://redis.io/topics/transactions) and restores the connection state to normal. When using [`WATCH`](../watch), `EXEC` will execute commands only if the watched keys were not modified, allowing for a [check-and-set mechanism](https://redis.io/topics/transactions#cas). Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): each element being the reply to each of the commands in the atomic transaction. When using [`WATCH`](../watch), `EXEC` can return a [Null reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings) if the execution was aborted. redis COMMAND COMMAND ======= ``` COMMAND LIST ``` Syntax ``` COMMAND LIST [FILTERBY <MODULE module-name | ACLCAT category | PATTERN pattern>] ``` Available since: 7.0.0 Time complexity: O(N) where N is the total number of Redis commands ACL categories: `@slow`, `@connection`, Return an array of the server's command names. 
You can use the optional *FILTERBY* modifier to apply one of the following filters: * **MODULE module-name**: get the commands that belong to the module specified by *module-name*. * **ACLCAT category**: get the commands in the [ACL category](https://redis.io/docs/management/security/acl/#command-categories) specified by *category*. * **PATTERN pattern**: get the commands that match the given glob-like *pattern*. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): a list of command names. redis CLUSTER CLUSTER ======= ``` CLUSTER SHARDS ``` Syntax ``` CLUSTER SHARDS ``` Available since: 7.0.0 Time complexity: O(N) where N is the total number of cluster nodes ACL categories: `@slow`, `CLUSTER SHARDS` returns details about the shards of the cluster. A shard is defined as a collection of nodes that serve the same set of slots and that replicate from each other. A shard may only have a single master at a given time, but may have multiple or no replicas. It is possible for a shard to not be serving any slots while still having replicas. This command replaces the [`CLUSTER SLOTS`](../cluster-slots) command, by providing a more efficient and extensible representation of the cluster. The command is suitable to be used by Redis Cluster client libraries in order to understand the topology of the cluster. A client should issue this command on startup in order to retrieve the map associating cluster *hash slots* with actual node information. This map should be used to direct commands to the node that is likely serving the slot associated with a given command. In the event the command is sent to the wrong node, in that it received a '-MOVED' redirect, this command can then be used to update the topology of the cluster. The command returns an array of shards, with each shard containing two fields, 'slots' and 'nodes'. 
The 'slots' field is a list of slot ranges served by this shard, stored as pairs of integers representing the inclusive start and end slots of the ranges. For example, if a node owns the slots 1, 2, 3, 5, 7, 8 and 9, the slot ranges would be stored as [1-3], [5-5], [7-9]. The slots field would therefore be represented by the following list of integers. ``` 1) 1) "slots" 2) 1) (integer) 1 2) (integer) 3 3) (integer) 5 4) (integer) 5 5) (integer) 7 6) (integer) 9 ``` The 'nodes' field contains a list of all nodes within the shard. Each individual node is a map of attributes that describe the node. Some attributes are optional and more attributes may be added in the future. The current list of attributes: * id: The unique node id for this particular node. * endpoint: The preferred endpoint to reach the node, see below for more information about the possible values of this field. * ip: The IP address to send requests to for this node. * hostname (optional): The announced hostname to send requests to for this node. * port (optional): The TCP (non-TLS) port of the node. At least one of port or tls-port will be present. * tls-port (optional): The TLS port of the node. At least one of port or tls-port will be present. * role: The replication role of this node. * replication-offset: The replication offset of this node. This information can be used to send commands to the most up-to-date replicas. * health: Either `online`, `failed`, or `loading`. This information should be used to determine which nodes should be sent traffic. The `loading` health state should be used to know that a node is not currently eligible to serve traffic, but may be eligible in the future. The endpoint, along with the port, defines the location that clients should use to send requests for a given slot. 
A NULL value for the endpoint indicates the node has an unknown endpoint and the client should connect to the same endpoint it used to send the `CLUSTER SHARDS` command but with the port returned from the command. This unknown endpoint configuration is useful when the Redis nodes are behind a load balancer that Redis doesn't know the endpoint of. Which endpoint is set is determined by the `cluster-preferred-endpoint-type` config. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): nested list of a map of hash ranges and shard nodes. Examples -------- ``` > CLUSTER SHARDS 1) 1) "slots" 2) 1) (integer) 0 2) (integer) 5460 3) "nodes" 4) 1) 1) "id" 2) "e10b7051d6bf2d5febd39a2be297bbaea6084111" 3) "port" 4) (integer) 30001 5) "ip" 6) "127.0.0.1" 7) "endpoint" 8) "127.0.0.1" 9) "role" 10) "master" 11) "replication-offset" 12) (integer) 72156 13) "health" 14) "online" 2) 1) "id" 2) "1901f5962d865341e81c85f9f596b1e7160c35ce" 3) "port" 4) (integer) 30006 5) "ip" 6) "127.0.0.1" 7) "endpoint" 8) "127.0.0.1" 9) "role" 10) "replica" 11) "replication-offset" 12) (integer) 72156 13) "health" 14) "online" 2) 1) "slots" 2) 1) (integer) 10923 2) (integer) 16383 3) "nodes" 4) 1) 1) "id" 2) "fd20502fe1b32fc32c15b69b0a9537551f162f1f" 3) "port" 4) (integer) 30003 5) "ip" 6) "127.0.0.1" 7) "endpoint" 8) "127.0.0.1" 9) "role" 10) "master" 11) "replication-offset" 12) (integer) 72156 13) "health" 14) "online" 2) 1) "id" 2) "6daa25c08025a0c7e4cc0d1ab255949ce6cee902" 3) "port" 4) (integer) 30005 5) "ip" 6) "127.0.0.1" 7) "endpoint" 8) "127.0.0.1" 9) "role" 10) "replica" 11) "replication-offset" 12) (integer) 72156 13) "health" 14) "online" 3) 1) "slots" 2) 1) (integer) 5461 2) (integer) 10922 3) "nodes" 4) 1) 1) "id" 2) "a4a3f445ead085eb3eb9ee7d8c644ec4481ec9be" 3) "port" 4) (integer) 30002 5) "ip" 6) "127.0.0.1" 7) "endpoint" 8) "127.0.0.1" 9) "role" 10) "master" 11) "replication-offset" 12) (integer) 72156 13) "health" 14) "online" 2) 1) "id" 2) 
"da6d5847aa019e9b9d2a8aa24a75f856fd3456cc" 3) "port" 4) (integer) 30004 5) "ip" 6) "127.0.0.1" 7) "endpoint" 8) "127.0.0.1" 9) "role" 10) "replica" 11) "replication-offset" 12) (integer) 72156 13) "health" 14) "online" ```
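The flat `slots` list from the reply above can be folded back into inclusive `(start, end)` ranges by consuming it two integers at a time. A minimal client-side sketch (the helper name is illustrative, not part of any client library):

```python
def fold_slot_ranges(flat):
    """Fold the flat 'slots' list from CLUSTER SHARDS into (start, end) pairs."""
    if len(flat) % 2 != 0:
        raise ValueError("slots list must contain an even number of integers")
    # Consecutive integers are the inclusive start and end of one range.
    return [(flat[i], flat[i + 1]) for i in range(0, len(flat), 2)]

# The earlier example: slots 1-3, 5, and 7-9.
print(fold_slot_ranges([1, 3, 5, 5, 7, 9]))  # [(1, 3), (5, 5), (7, 9)]
```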
redis RPOPLPUSH

RPOPLPUSH
=========

```
RPOPLPUSH (deprecated)
```

As of Redis version 6.2.0, this command is regarded as deprecated.

It can be replaced by [`LMOVE`](../lmove) with the `RIGHT` and `LEFT` arguments when migrating or writing new code.

Syntax

```
RPOPLPUSH source destination
```

Available since: 1.2.0

Time complexity: O(1)

ACL categories: `@write`, `@list`, `@slow`,

Atomically returns and removes the last element (tail) of the list stored at `source`, and pushes that element as the first element (head) of the list stored at `destination`.

For example: consider `source` holding the list `a,b,c`, and `destination` holding the list `x,y,z`. Executing `RPOPLPUSH` results in `source` holding `a,b` and `destination` holding `c,x,y,z`.

If `source` does not exist, the value `nil` is returned and no operation is performed. If `source` and `destination` are the same, the operation is equivalent to removing the last element from the list and pushing it as the first element of the list, so it can be considered a list rotation command.

Return
------

[Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): the element being popped and pushed.

Examples
--------

```
RPUSH mylist "one"
RPUSH mylist "two"
RPUSH mylist "three"
RPOPLPUSH mylist myotherlist
LRANGE mylist 0 -1
LRANGE myotherlist 0 -1
```

Pattern: Reliable queue
-----------------------

Redis is often used as a messaging server to implement processing of background jobs or other kinds of messaging tasks. A simple form of queue is often obtained by pushing values into a list on the producer side, and waiting for these values on the consumer side using [`RPOP`](../rpop) (using polling), or [`BRPOP`](../brpop) if the client is better served by a blocking operation.
However, in this context the obtained queue is not *reliable*, as messages can be lost, for example when there is a network problem or if the consumer crashes just after the message is received but before it can be processed.

`RPOPLPUSH` (or [`BRPOPLPUSH`](../brpoplpush) for the blocking variant) offers a way to avoid this problem: the consumer fetches the message and at the same time pushes it into a *processing* list. It will use the [`LREM`](../lrem) command in order to remove the message from the *processing* list once the message has been processed.

An additional client may monitor the *processing* list for items that remain there for too long, pushing timed-out items into the queue again if needed.

Pattern: Circular list
----------------------

Using `RPOPLPUSH` with the same source and destination key, a client can visit all the elements of an N-element list, one after the other, in O(N), without transferring the full list from the server to the client in a single [`LRANGE`](../lrange) operation.

The above pattern works even if one or both of the following conditions occur:

* There are multiple clients rotating the list: they'll fetch different elements, until all the elements of the list are visited, and the process restarts.
* Other clients are actively pushing new items at the end of the list.

The above makes it very simple to implement a system where a set of items must be processed by N workers continuously and as fast as possible. An example is a monitoring system that must check that a set of web sites are reachable, with the smallest delay possible, using a number of parallel workers.

Note that this implementation of workers is trivially scalable and reliable, because even if a message is lost the item is still in the queue and will be processed at the next iteration.
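The reliable-queue steps above can be sketched in plain Python, with lists standing in for Redis lists (this is an in-memory illustration of the semantics only, not a Redis client; in production the two steps would be `RPOPLPUSH`/`LMOVE` and then `LREM`):

```python
def rpoplpush(source, destination):
    """Atomically pop the tail of `source` and push it as the head of
    `destination`, mirroring what RPOPLPUSH does on the server.
    Here list[0] is the head and list[-1] is the tail."""
    if not source:
        return None          # RPOPLPUSH on a missing key returns nil
    item = source.pop()          # RPOP: remove the tail element
    destination.insert(0, item)  # LPUSH: make it the new head
    return item

queue = ["job3", "job2", "job1"]   # job1 is the tail, so it is popped first
processing = []

job = rpoplpush(queue, processing)  # fetch and park the job in `processing`
# ... handle the job; if the worker crashes here, the job survives in `processing`
processing.remove(job)              # LREM: acknowledge once the job is done
```

Passing the same list as both arguments reproduces the circular-list rotation described above.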
redis CLUSTER

CLUSTER
=======

```
CLUSTER NODES
```

Syntax

```
CLUSTER NODES
```

Available since: 3.0.0

Time complexity: O(N) where N is the total number of Cluster nodes

ACL categories: `@slow`,

Each node in a Redis Cluster has its own view of the current cluster configuration, given by the set of known nodes, the state of the connection we have with such nodes, their flags, properties and assigned slots, and so forth.

`CLUSTER NODES` provides all this information, that is, the current cluster configuration of the node we are contacting, in a serialization format which happens to be exactly the same as the one used by Redis Cluster itself in order to store the cluster state on disk (however, the on-disk cluster state has some additional information appended at the end).

Note that normally clients that want to fetch the map between Cluster hash slots and node addresses should use [`CLUSTER SLOTS`](../cluster-slots) instead. `CLUSTER NODES`, which provides more information, should be used for administrative tasks, debugging, and configuration inspections. It is also used by `redis-cli` in order to manage a cluster.

Serialization format
--------------------

The output of the command is just a space-separated CSV string, where each line represents a node in the cluster. Starting from 7.2.0, the output of the command always contains a new auxiliary field called shard-id. The following is an example of output on Redis 7.2.0.
```
07c37dfeb235213a872192d90877d0cd55635b91 127.0.0.1:30004@31004,,shard-id=69bc080733d1355567173199cff4a6a039a2f024 slave e7d1eecce10fd6bb5eb35b9f99a514335d9ba9ca 0 1426238317239 4 connected
67ed2db8d677e59ec4a4cefb06858cf2a1a89fa1 127.0.0.1:30002@31002,,shard-id=114f6674a35b84949fe567f5dfd41415ee776261 master - 0 1426238316232 2 connected 5461-10922
292f8b365bb7edb5e285caf0b7e6ddc7265d2f4f 127.0.0.1:30003@31003,,shard-id=fdb36c73e72dd027bc19811b7c219ef6e55c550e master - 0 1426238318243 3 connected 10923-16383
6ec23923021cf3ffec47632106199cb7f496ce01 127.0.0.1:30005@31005,,shard-id=114f6674a35b84949fe567f5dfd41415ee776261 slave 67ed2db8d677e59ec4a4cefb06858cf2a1a89fa1 0 1426238316232 5 connected
824fe116063bc5fcf9f4ffd895bc17aee7731ac3 127.0.0.1:30006@31006,,shard-id=fdb36c73e72dd027bc19811b7c219ef6e55c550e slave 292f8b365bb7edb5e285caf0b7e6ddc7265d2f4f 0 1426238317741 6 connected
e7d1eecce10fd6bb5eb35b9f99a514335d9ba9ca 127.0.0.1:30001@31001,,shard-id=69bc080733d1355567173199cff4a6a039a2f024 myself,master - 0 0 1 connected 0-5460
```

Each line is composed of the following fields:

```
<id> <ip:port@cport[,hostname[,auxiliary_field=value]*]> <flags> <master> <ping-sent> <pong-recv> <config-epoch> <link-state> <slot> <slot> ... <slot>
```

The meaning of each field is the following:

1. `id`: The node ID, a 40-character globally unique string generated when a node is created and never changed again (unless `CLUSTER RESET HARD` is used).
2. `ip:port@cport`: The node address that clients should contact to run queries.
3. `hostname`: A human readable string that can be configured via the `cluster-announce-hostname` setting. The max length of the string is 256 characters, excluding the null terminator. The name can contain ASCII alphanumeric characters, '-', and '.' only.
4. `[,auxiliary_field=value]*`: A list of comma-separated key-value pairs that represent various node properties, such as `shard-id`. There is no intrinsic order among auxiliary fields.
The auxiliary fields can appear at different positions in the list from release to release. Both the key name and value can contain ASCII alphanumeric characters and the characters in `!#$%&()*+-.:;<>?@[]^_{|}~` only. Auxiliary fields are explained in detail in the section below.
5. `flags`: A list of comma-separated flags: `myself`, `master`, `slave`, `fail?`, `fail`, `handshake`, `noaddr`, `nofailover`, `noflags`. Flags are explained below.
6. `master`: If the node is a replica, and the primary is known, the primary node ID, otherwise the "-" character.
7. `ping-sent`: Unix time at which the currently active ping was sent, or zero if there are no pending pings, in milliseconds.
8. `pong-recv`: Unix time the last pong was received, in milliseconds.
9. `config-epoch`: The configuration epoch (or version) of the current node (or of the current primary if the node is a replica). Each time there is a failover, a new, unique, monotonically increasing configuration epoch is created. If multiple nodes claim to serve the same hash slots, the one with the higher configuration epoch wins.
10. `link-state`: The state of the link used for the node-to-node cluster bus. Use this link to communicate with the node. Can be `connected` or `disconnected`.
11. `slot`: A hash slot number or range. Starting from argument number 9, there may be up to 16384 entries in total (a limit that is never reached in practice). This is the list of hash slots served by this node. If the entry is just a number, it is parsed as such. If it is a range, it is in the form `start-end`, and means that the node is responsible for all the hash slots from `start` to `end` including the start and end values.

Auxiliary fields are:

* `shard-id`: a 40-character globally unique string generated when a node is created. A node's shard id changes only when the node joins a different shard via `cluster replicate`, in which case the node's shard id is updated to that of its new primary.

Flags are:

* `myself`: The node you are contacting.
* `master`: Node is a primary.
* `slave`: Node is a replica.
* `fail?`: Node is in `PFAIL` state. Not reachable for the node you are contacting, but still logically reachable (not in `FAIL` state).
* `fail`: Node is in `FAIL` state. It was not reachable for multiple nodes that promoted the `PFAIL` state to `FAIL`.
* `handshake`: Untrusted node, we are handshaking.
* `noaddr`: No address known for this node.
* `nofailover`: Replica will not try to failover.
* `noflags`: No flags at all.

Notes on published config epochs
--------------------------------

Replicas broadcast their primary's config epochs (in order to get an `UPDATE` message if they are found to be stale), so the real config epoch of a replica (which is more or less meaningless, since replicas don't serve hash slots) can only be obtained by checking the node flagged as `myself`, which is the entry of the node we are asking to generate the `CLUSTER NODES` output. The other replicas' epochs reflect what they publish in heartbeat packets, which is the configuration epoch of the primaries they are currently replicating.

Special slot entries
--------------------

Normally hash slots associated to a given node are in one of the following formats, as already explained above:

1. Single number: 3894
2. Range: 3900-4000

However node hash slots can be in a special state, used in order to communicate errors after a node restart (mismatch between the keys in the AOF/RDB file and the node's hash slots configuration), or when there is a resharding operation in progress. These two states are **importing** and **migrating**.

The meaning of the two states is explained in the Redis Cluster Specification, however the gist of the two states is the following:

* **Importing** slots are not yet part of the node's hash slots; there is a migration in progress. The node will accept queries about these slots only if the `ASK` command is used.
* **Migrating** slots are assigned to the node, but are being migrated to some other node.
The node will accept queries if all the keys in the command exist already, otherwise it will emit what is called an **ASK redirection**, to force new key creation directly in the importing node.

Importing and migrating slots are emitted in the `CLUSTER NODES` output as follows:

* **Importing slot:** `[slot_number-<-importing_from_node_id]`
* **Migrating slot:** `[slot_number->-migrating_to_node_id]`

The following are a few examples of importing and migrating slots:

* `[93-<-292f8b365bb7edb5e285caf0b7e6ddc7265d2f4f]`
* `[1002-<-67ed2db8d677e59ec4a4cefb06858cf2a1a89fa1]`
* `[77->-e7d1eecce10fd6bb5eb35b9f99a514335d9ba9ca]`
* `[16311->-292f8b365bb7edb5e285caf0b7e6ddc7265d2f4f]`

Note that these special entries contain no spaces, so the `CLUSTER NODES` output remains plain CSV with space as the separator even when these special slots are emitted. However, a complete parser for the format should be able to handle them.

Note that:

1. Migration and importing slots are only added to the node flagged as `myself`. This information is local to a node, for its own slots.
2. Importing and migrating slots are provided as **additional info**. If the node has a given hash slot assigned, it will also be present as a plain number in the list of hash slots, so clients that don't have a clue about hash slot migrations can just skip these special fields.

Return
------

[Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): The serialized cluster configuration.

**A note about the word slave used in this man page and command name**: Starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API is naturally deprecated.
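The field layout documented above can be illustrated with a short parsing sketch (illustrative only, not a complete client-grade parser: it skips the special importing/migrating slot entries):

```python
def parse_cluster_node_line(line):
    """Sketch parser for one line of CLUSTER NODES output."""
    fields = line.split(" ")
    # Field 2 is ip:port@cport[,hostname[,aux_field=value]*].
    addr_parts = fields[1].split(",")
    slots = []
    for entry in fields[8:]:
        if entry.startswith("["):
            continue  # importing/migrating entry; ignored in this sketch
        start, _, end = entry.partition("-")
        slots.append((int(start), int(end or start)))  # single slot or range
    return {
        "id": fields[0],
        "address": addr_parts[0],
        "hostname": addr_parts[1] if len(addr_parts) > 1 and addr_parts[1] else None,
        "aux": dict(p.split("=", 1) for p in addr_parts[2:] if "=" in p),
        "flags": fields[2].split(","),
        "master": None if fields[3] == "-" else fields[3],
        "ping_sent": int(fields[4]),
        "pong_recv": int(fields[5]),
        "config_epoch": int(fields[6]),
        "link_state": fields[7],
        "slots": slots,
    }
```

Running it over the second line of the 7.2.0 example output yields the master's address, its single `5461-10922` slot range, and the `shard-id` auxiliary field.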
redis CLUSTER CLUSTER ======= ``` CLUSTER KEYSLOT ``` Syntax ``` CLUSTER KEYSLOT key ``` Available since: 3.0.0 Time complexity: O(N) where N is the number of bytes in the key ACL categories: `@slow`, Returns an integer identifying the hash slot the specified key hashes to. This command is mainly useful for debugging and testing, since it exposes via an API the underlying Redis implementation of the hashing algorithm. Example use cases for this command: 1. Client libraries may use Redis in order to test their own hashing algorithm, generating random keys and hashing them with both their local implementation and using Redis `CLUSTER KEYSLOT` command, then checking if the result is the same. 2. Humans may use this command in order to check what is the hash slot, and then the associated Redis Cluster node, responsible for a given key. Example ------- ``` > CLUSTER KEYSLOT somekey (integer) 11058 > CLUSTER KEYSLOT foo{hash_tag} (integer) 2515 > CLUSTER KEYSLOT bar{hash_tag} (integer) 2515 ``` Note that the command implements the full hashing algorithm, including support for **hash tags**, that is the special property of Redis Cluster key hashing algorithm, of hashing just what is between `{` and `}` if such a pattern is found inside the key name, in order to force multiple keys to be handled by the same node. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): The hash slot number. redis BITCOUNT BITCOUNT ======== ``` BITCOUNT ``` Syntax ``` BITCOUNT key [start end [BYTE | BIT]] ``` Available since: 2.6.0 Time complexity: O(N) ACL categories: `@read`, `@bitmap`, `@slow`, Count the number of set bits (population counting) in a string. By default all the bytes contained in the string are examined. It is possible to specify the counting operation only in an interval passing the additional arguments *start* and *end*. 
As with the [`GETRANGE`](../getrange) command, start and end can contain negative values in order to index bytes starting from the end of the string, where -1 is the last byte, -2 is the penultimate, and so forth.

Non-existent keys are treated as empty strings, so the command will return zero.

By default, the additional arguments *start* and *end* specify a byte index. We can use an additional argument `BIT` to specify a bit index. So 0 is the first bit, 1 is the second bit, and so forth. For negative values, -1 is the last bit, -2 is the penultimate, and so forth.

Return
------

[Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers)

The number of bits set to 1.

Examples
--------

```
SET mykey "foobar"
BITCOUNT mykey
BITCOUNT mykey 0 0
BITCOUNT mykey 1 1
BITCOUNT mykey 1 1 BYTE
BITCOUNT mykey 5 30 BIT
```

Pattern: real-time metrics using bitmaps
----------------------------------------

Bitmaps are a very space-efficient representation of certain kinds of information. One example is a Web application that needs the history of user visits, so that for instance it is possible to determine which users are good targets for beta features.

Using the [`SETBIT`](../setbit) command this is trivial to accomplish, identifying every day with a small progressive integer. For instance day 0 is the first day the application was put online, day 1 the next day, and so forth.

Every time a user performs a page view, the application can register that in the current day the user visited the web site using the [`SETBIT`](../setbit) command, setting the bit corresponding to the current day.

Later it will be trivial to know the number of single days the user visited the web site simply by calling the `BITCOUNT` command against the bitmap.

A similar pattern where user IDs are used instead of days is described in the article called "[Fast easy realtime metrics using Redis bitmaps](http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps)".
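The daily-visit pattern can be illustrated without a server by letting a `bytearray` stand in for the bitmap (in Redis the same two steps are `SETBIT key day 1` and `BITCOUNT key`; note that Redis addresses bit 0 as the most significant bit of the first byte):

```python
def setbit(bitmap, offset):
    """Set bit `offset` to 1, growing the bitmap on demand as Redis does."""
    byte, bit = divmod(offset, 8)
    if byte >= len(bitmap):
        bitmap.extend(b"\x00" * (byte - len(bitmap) + 1))
    bitmap[byte] |= 0x80 >> bit  # bit 0 is the most significant bit of byte 0

def bitcount(bitmap):
    """Count the set bits, like BITCOUNT with no range arguments."""
    return sum(bin(b).count("1") for b in bitmap)

visits = bytearray()
for day in (0, 1, 7):    # the user visited on days 0, 1 and 7
    setbit(visits, day)
print(bitcount(visits))  # 3
```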
Performance considerations
--------------------------

In the above example of counting days, even after the application has been online for 10 years, we still have just `365*10` bits of data per user, that is just 456 bytes per user. With this amount of data `BITCOUNT` is still as fast as any other O(1) Redis command like [`GET`](../get) or [`INCR`](../incr).

When the bitmap is big, there are two alternatives:

* Keeping a separate key that is incremented every time the bitmap is modified. This can be very efficient and atomic using a small Redis Lua script.
* Running the `BITCOUNT` query incrementally using the *start* and *end* optional parameters, accumulating the results client-side, and optionally caching the result into a key.

History
-------

* Starting with Redis version 7.0.0: Added the `BYTE|BIT` option.

redis MEMORY

MEMORY
======

```
MEMORY USAGE
```

Syntax

```
MEMORY USAGE key [SAMPLES count]
```

Available since: 4.0.0

Time complexity: O(N) where N is the number of samples.

ACL categories: `@read`, `@slow`,

The `MEMORY USAGE` command reports the number of bytes that a key and its value require to be stored in RAM.

The reported usage is the total of memory allocations for data and administrative overheads that a key and its value require.

For nested data types, the optional `SAMPLES` option can be provided, where `count` is the number of sampled nested values. The samples are averaged to estimate the total size. By default, this option is set to `5`. To sample all of the nested values, use `SAMPLES 0`.

Examples
--------

With Redis v4.0.1 64-bit and **jemalloc**, the empty string measures as follows:

```
> SET "" ""
OK
> MEMORY USAGE ""
(integer) 51
```

These bytes are pure overhead at the moment, as no actual data is stored, and are used for maintaining the internal data structures of the server. Longer keys and values show asymptotically linear usage.
```
> SET foo bar
OK
> MEMORY USAGE foo
(integer) 54
> SET cento 01234567890123456789012345678901234567890123
45678901234567890123456789012345678901234567890123456789
OK
127.0.0.1:6379> MEMORY USAGE cento
(integer) 153
```

Return
------

[Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the memory usage in bytes, or `nil` when the key does not exist.

redis XGROUP

XGROUP
======

```
XGROUP DELCONSUMER
```

Syntax

```
XGROUP DELCONSUMER key group consumer
```

Available since: 5.0.0

Time complexity: O(1)

ACL categories: `@write`, `@stream`, `@slow`,

The `XGROUP DELCONSUMER` command deletes a consumer from the consumer group.

Sometimes it may be useful to remove old consumers since they are no longer used.

Note, however, that any pending messages that the consumer had will become unclaimable after it is deleted. It is strongly recommended, therefore, that any pending messages are claimed or acknowledged prior to deleting the consumer from the group.

Return
------

[Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of pending messages that the consumer had before it was deleted

redis CLUSTER

CLUSTER
=======

```
CLUSTER GETKEYSINSLOT
```

Syntax

```
CLUSTER GETKEYSINSLOT slot count
```

Available since: 3.0.0

Time complexity: O(N) where N is the number of requested keys

ACL categories: `@slow`,

The command returns an array of key names stored in the contacted node and hashing to the specified hash slot. The maximum number of keys to return is specified via the `count` argument, so that it is possible for the user of this API to batch-process keys.

The main usage of this command is during rehashing of cluster slots from one node to another. The way the rehashing is performed is exposed in the Redis Cluster specification, or in a simpler-to-digest form, as an appendix of the [`CLUSTER SETSLOT`](../cluster-setslot) command documentation.
``` > CLUSTER GETKEYSINSLOT 7000 3 1) "key_39015" 2) "key_89793" 3) "key_92937" ``` Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): From 0 to *count* key names in a Redis array reply.
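The batch-processing loop this command enables during resharding can be sketched as follows. Everything here is an illustrative stand-in: `drain_slot` and the `migrate` callback are not real API, and a plain dict simulates the node's keyspace where a real tool would issue `CLUSTER GETKEYSINSLOT` and `MIGRATE`:

```python
def drain_slot(node_keys, slot, count, migrate):
    """Repeatedly take batches of at most `count` keys from `slot` and hand
    each batch to `migrate`, the way a resharding tool pages through a slot."""
    while True:
        batch = node_keys.get(slot, [])[:count]  # CLUSTER GETKEYSINSLOT slot count
        if not batch:
            return                               # slot is empty, migration done
        migrate(batch)                           # MIGRATE ... KEYS key [key ...]
        node_keys[slot] = node_keys[slot][len(batch):]

moved = []
keys = {7000: ["key_39015", "key_89793", "key_92937"]}
drain_slot(keys, 7000, 2, moved.extend)
print(moved)  # ['key_39015', 'key_89793', 'key_92937']
```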
redis FUNCTION

FUNCTION
========

```
FUNCTION DUMP
```

Syntax

```
FUNCTION DUMP
```

Available since: 7.0.0

Time complexity: O(N) where N is the number of functions

ACL categories: `@slow`, `@scripting`,

Return the serialized payload of loaded libraries. You can restore the serialized payload later with the [`FUNCTION RESTORE`](../function-restore) command.

For more information please refer to [Introduction to Redis Functions](https://redis.io/topics/functions-intro).

Return
------

[Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): the serialized payload

Examples
--------

The following example shows how to dump loaded libraries using `FUNCTION DUMP`, then calls [`FUNCTION FLUSH`](../function-flush) to delete all the libraries, and finally restores the original libraries from the serialized payload with [`FUNCTION RESTORE`](../function-restore).

```
redis> FUNCTION DUMP
"\xf6\x05mylib\x03LUA\x00\xc3@D@J\x1aredis.register_function('my@\x0b\x02', @\x06`\x12\x11keys, args) return`\x0c\a[1] end)\n\x00@\n)\x11\xc8|\x9b\xe4"
redis> FUNCTION FLUSH
OK
redis> FUNCTION RESTORE "\xf6\x05mylib\x03LUA\x00\xc3@D@J\x1aredis.register_function('my@\x0b\x02', @\x06`\x12\x11keys, args) return`\x0c\a[1] end)\n\x00@\n)\x11\xc8|\x9b\xe4"
OK
redis> FUNCTION LIST
1) 1) "library_name"
   2) "mylib"
   3) "engine"
   4) "LUA"
   5) "description"
   6) (nil)
   7) "functions"
   8) 1) 1) "name"
         2) "myfunc"
         3) "description"
         4) (nil)
```

redis GRAPH.PROFILE

GRAPH.PROFILE
=============

```
GRAPH.PROFILE
```

Syntax

```
GRAPH.PROFILE graph query [TIMEOUT timeout]
```

Available in: [Redis Stack](https://redis.io/docs/stack) / [Graph 2.0.0](https://redis.io/docs/stack/graph)

Time complexity:

Executes a query and produces an execution plan augmented with metrics for each operation's execution.
Arguments: `Graph name, Query`

Returns: `String representation of a query execution plan, with details on results produced by and time spent in each operation.`

`GRAPH.PROFILE` is a parallel entrypoint to [`GRAPH.QUERY`](../graph.query). It accepts and executes the same queries, but it will not emit results, instead returning the operation tree structure alongside the number of records produced and total runtime of each operation.

It is important to note that this blends elements of [GRAPH.QUERY](../graph.query) and [GRAPH.EXPLAIN](../graph.explain). It is not a dry run and will perform all graph modifications expected of the query, but will not output results produced by a `RETURN` clause or query statistics.

```
GRAPH.PROFILE imdb "MATCH (actor_a:Actor)-[:ACT]->(:Movie)<-[:ACT]-(actor_b:Actor) WHERE actor_a <> actor_b CREATE (actor_a)-[:COSTARRED_WITH]->(actor_b)"
1) "Create | Records produced: 11208, Execution time: 168.208661 ms"
2) "    Filter | Records produced: 11208, Execution time: 1.250565 ms"
3) "        Conditional Traverse | Records produced: 12506, Execution time: 7.705860 ms"
4) "            Node By Label Scan | (actor_a:Actor) | Records produced: 1317, Execution time: 0.104346 ms"
```

redis ASKING

ASKING
======

```
ASKING
```

Syntax

```
ASKING
```

Available since: 3.0.0

Time complexity: O(1)

ACL categories: `@fast`, `@connection`,

When a cluster client receives an `-ASK` redirect, the `ASKING` command is sent to the target node followed by the command which was redirected. This is normally done automatically by cluster clients.

If an `-ASK` redirect is received during a transaction, only one `ASKING` command needs to be sent to the target node before sending the complete transaction to the target node.

See [ASK redirection in the Redis Cluster Specification](https://redis.io/topics/cluster-spec#ask-redirection) for details.

Return
------

[Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK`.
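The redirect-then-`ASKING` sequence clients perform can be sketched with a toy transport. Everything here is illustrative: the `send` callable, the addresses, and the flattened error string stand in for a real connection pool and RESP error replies:

```python
def execute_with_ask(send, addr, command):
    """Run `command` against `addr`; follow a single -ASK redirect if one
    is returned, prefixing the retry with ASKING as the spec requires."""
    reply = send(addr, command)
    if isinstance(reply, str) and reply.startswith("-ASK"):
        _, _slot, target = reply.split(" ", 2)
        send(target, ["ASKING"])      # one-shot permission on the target node
        return send(target, command)  # then retry the original command there
    return reply

# Toy transport: node :7000 redirects, node :7001 serves the key.
def send(addr, command):
    if addr == "127.0.0.1:7000" and command != ["ASKING"]:
        return "-ASK 3999 127.0.0.1:7001"
    return "OK" if command == ["ASKING"] else "bar"

print(execute_with_ask(send, "127.0.0.1:7000", ["GET", "foo"]))  # bar
```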
redis BZPOPMIN

BZPOPMIN
========

```
BZPOPMIN
```

Syntax

```
BZPOPMIN key [key ...] timeout
```

Available since: 5.0.0

Time complexity: O(log(N)) with N being the number of elements in the sorted set.

ACL categories: `@write`, `@sortedset`, `@fast`, `@blocking`,

`BZPOPMIN` is the blocking variant of the sorted set [`ZPOPMIN`](../zpopmin) primitive.

It is the blocking version because it blocks the connection when there are no members to pop from any of the given sorted sets. A member with the lowest score is popped from the first non-empty sorted set, with the given keys being checked in the order that they are given.

The `timeout` argument is interpreted as a double value specifying the maximum number of seconds to block. A timeout of zero can be used to block indefinitely.

See the [BLPOP documentation](../blpop) for the exact semantics, since `BZPOPMIN` is identical to [`BLPOP`](../blpop) with the only difference being the data structure being popped from.

Return
------

[Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): specifically:

* A `nil` multi-bulk when no element could be popped and the timeout expired.
* A three-element multi-bulk with the first element being the name of the key where a member was popped, the second element is the popped member itself, and the third element is the score of the popped element.

Examples
--------

```
redis> DEL zset1 zset2
(integer) 0
redis> ZADD zset1 0 a 1 b 2 c
(integer) 3
redis> BZPOPMIN zset1 zset2 0
1) "zset1"
2) "a"
3) "0"
```

History
-------

* Starting with Redis version 6.0.0: `timeout` is interpreted as a double instead of an integer.
redis JSON.TYPE JSON.TYPE ========= ``` JSON.TYPE ``` Syntax ``` JSON.TYPE key [path] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [JSON 1.0.0](https://redis.io/docs/stack/json) Time complexity: O(1) when path is evaluated to a single value, O(N) when path is evaluated to multiple values, where N is the size of the key Report the type of JSON value at `path` [Examples](#examples) Required arguments ------------------ `key` is key to parse. Optional arguments ------------------ `path` is JSONPath to specify. Default is root `$`. Returns null if the `key` or `path` do not exist. Return ------ JSON.TYPE returns an array of string replies for each path, specified as the value's type. For more information about replies, see [Redis serialization protocol specification](https://redis.io/docs/reference/protocol-spec). Examples -------- ``` 127.0.0.1:6379> JSON.SET doc $ '{"a":2, "nested": {"a": true}, "foo": "bar"}' OK 127.0.0.1:6379> JSON.TYPE doc $..foo 1) "string" 127.0.0.1:6379> JSON.TYPE doc $..a 1) "integer" 2) "boolean" 127.0.0.1:6379> JSON.TYPE doc $..dummy ``` See also -------- [`JSON.SET`](../json.set) | [`JSON.ARRLEN`](../json.arrlen) Related topics -------------- * [RedisJSON](https://redis.io/docs/stack/json) * [Index and search JSON documents](https://redis.io/docs/stack/search/indexing_json) redis CF.SCANDUMP CF.SCANDUMP =========== ``` CF.SCANDUMP ``` Syntax ``` CF.SCANDUMP key iterator ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 1.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(n), where n is the capacity Begins an incremental save of the cuckoo filter. This is useful for large cuckoo filters which cannot fit into the normal [`DUMP`](../dump) and [`RESTORE`](../restore) model. The first time this command is called, the value of `iter` should be 0. This command returns successive `(iter, data)` pairs until `(0, NULL)` indicates completion. 
### Parameters

* **key**: Name of the filter
* **iter**: Iterator value. This is either 0, or the iterator from a previous invocation of this command

Return
------

[Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) of [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) (*Iterator*) and [] (*Data*).

The Iterator is passed as input to the next invocation of `SCANDUMP`. If *Iterator* is 0, then it means iteration has completed.

The iterator-data pair should also be passed to `LOADCHUNK` when restoring the filter.

Examples
--------

```
redis> CF.RESERVE cf 8
OK
redis> CF.ADD cf item1
(integer) 1
redis> CF.SCANDUMP cf 0
1) (integer) 1
2) "\x01\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x14\x00\x01\x008\x9a\xe0\xd8\xc3\x7f\x00\x00"
redis> CF.SCANDUMP cf 1
1) (integer) 9
2) "\x00\x00\x00\x00\a\x00\x00\x00"
redis> CF.SCANDUMP cf 9
1) (integer) 0
2) (nil)
redis> FLUSHALL
OK
redis> CF.LOADCHUNK cf 1 "\x01\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x14\x00\x01\x008\x9a\xe0\xd8\xc3\x7f\x00\x00"
OK
redis> CF.LOADCHUNK cf 9 "\x00\x00\x00\x00\a\x00\x00\x00"
OK
redis> CF.EXISTS cf item1
(integer) 1
```

Python code (a sketch of the scan/load loop; `CF.SCANDUMP` and `CF.LOADCHUNK` stand for the corresponding client calls):

```
chunks = []
it = 0
while True:
    it, data = CF.SCANDUMP(key, it)
    if it == 0:
        break  # (0, NULL) marks the end of the incremental dump
    chunks.append((it, data))

# Load it back
for it, data in chunks:
    CF.LOADCHUNK(key, it, data)
```

redis CLUSTER

CLUSTER
=======

```
CLUSTER REPLICAS
```

Syntax

```
CLUSTER REPLICAS node-id
```

Available since: 5.0.0

Time complexity: O(1)

ACL categories: `@admin`, `@slow`, `@dangerous`,

The command provides a list of replica nodes replicating from the specified master node. The list is provided in the same format used by [`CLUSTER NODES`](../cluster-nodes) (please refer to its documentation for the specification of the format).
The command will fail if the specified node is not known or if it is not a master according to the node table of the node receiving the command.

Note that if a replica is added, moved, or removed from a given master node, and `CLUSTER REPLICAS` is sent to a node that has not yet received the configuration update, it may show stale information. However, eventually (in a matter of seconds if there are no network partitions) all the nodes will agree about the set of nodes associated with a given master.

Return
------

The command returns data in the same format as [`CLUSTER NODES`](../cluster-nodes).

redis GRAPH.CONFIG

GRAPH.CONFIG
============

```
GRAPH.CONFIG SET
```

Syntax

```
GRAPH.CONFIG SET name value
```

Available in: [Redis Stack](https://redis.io/docs/stack) / [Graph 2.2.11](https://redis.io/docs/stack/graph)

Time complexity:

Set the value of a RedisGraph configuration parameter.

Values set using `GRAPH.CONFIG SET` are not persisted after server restart.

RedisGraph configuration parameters are detailed [here](https://redis.io/docs/stack/graph/configuration).

Note: As detailed in the link above, not all RedisGraph configuration parameters can be set at run-time.

```
127.0.0.1:6379> graph.config get TIMEOUT
1) "TIMEOUT"
2) (integer) 0
127.0.0.1:6379> graph.config set TIMEOUT 10000
OK
127.0.0.1:6379> graph.config get TIMEOUT
1) "TIMEOUT"
2) (integer) 10000
```

```
127.0.0.1:6379> graph.config set THREAD_COUNT 10
(error) This configuration parameter cannot be set at run-time
```

redis FUNCTION

FUNCTION
========

```
FUNCTION FLUSH
```

Syntax

```
FUNCTION FLUSH [ASYNC | SYNC]
```

Available since: 7.0.0

Time complexity: O(N) where N is the number of functions deleted

ACL categories: `@write`, `@slow`, `@scripting`,

Deletes all the libraries.

Unless called with the optional mode argument, the `lazyfree-lazy-user-flush` configuration directive sets the effective behavior. Valid modes are:

* `ASYNC`: Asynchronously flush the libraries.
* `SYNC`: Synchronously flush the libraries.

For more information please refer to [Introduction to Redis Functions](https://redis.io/topics/functions-intro).

Return
------

[Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings)

redis FT.PROFILE

FT.PROFILE
==========

```
FT.PROFILE
```

Syntax

```
FT.PROFILE index SEARCH | AGGREGATE [LIMITED] QUERY query
```

Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 2.2.0](https://redis.io/docs/stack/search) Time complexity: O(N)

Applies an [`FT.SEARCH`](../ft.search) or [`FT.AGGREGATE`](../ft.aggregate) command and collects performance details

[Examples](#examples)

Required arguments
------------------

`index` is the index name, created using [`FT.CREATE`](../ft.create).

`SEARCH | AGGREGATE` selects whether to profile an [`FT.SEARCH`](../ft.search) or an [`FT.AGGREGATE`](../ft.aggregate) query.

`LIMITED` removes details of the `reader` iterator.

`QUERY {query}` is the query string, sent to [`FT.SEARCH`](../ft.search).

**Note:** To reduce the size of the output, use `NOCONTENT` or `LIMIT 0 0` to reduce the reply results or `LIMITED` to not reply with details of `reader iterators` inside built-in unions such as `fuzzy` or `prefix`.

Return
------

`FT.PROFILE` returns an array reply, with the first array reply identical to the reply of [`FT.SEARCH`](../ft.search) and [`FT.AGGREGATE`](../ft.aggregate) and a second array reply with information of time in milliseconds (ms) used to create the query and time and count of calls of iterators and result-processors.

Return value has an array with two elements:

* Results - The normal reply from RediSearch, similar to a cursor.
* Profile - The details in the profile are:
  + Total profile time - The total runtime of the query, in ms.
  + Parsing time - Parsing time of the query and parameters into an execution plan, in ms.
  + Pipeline creation time - Creation time of execution plan including iterators, result processors, and reducers creation, in ms.
  + Iterators profile - Index iterators information including their type, term, count, and time data. Inverted-index iterators additionally report the number of elements they contain. Hybrid vector iterators, which return the top results from the vector index in batches, additionally report the number of batches.
  + Result processors profile - Result processors chain with type, count, and time data.

Examples
--------

**Collect performance information about an index**

```
127.0.0.1:6379> FT.PROFILE idx SEARCH QUERY "hello world"
1) 1) (integer) 1
   2) "doc1"
   3) 1) "t"
      2) "hello world"
2) 1) 1) Total profile time
      2) "0.47199999999999998"
   2) 1) Parsing time
      2) "0.218"
   3) 1) Pipeline creation time
      2) "0.032000000000000001"
   4) 1) Iterators profile
      2) 1) Type
         2) INTERSECT
         3) Time
         4) "0.025000000000000001"
         5) Counter
         6) (integer) 1
         7) Child iterators
         8) 1) Type
            2) TEXT
            3) Term
            4) hello
            5) Time
            6) "0.0070000000000000001"
            7) Counter
            8) (integer) 1
            9) Size
           10) (integer) 1
         9) 1) Type
            2) TEXT
            3) Term
            4) world
            5) Time
            6) "0.0030000000000000001"
            7) Counter
            8) (integer) 1
            9) Size
           10) (integer) 1
   5) 1) Result processors profile
      2) 1) Type
         2) Index
         3) Time
         4) "0.036999999999999998"
         5) Counter
         6) (integer) 1
      3) 1) Type
         2) Scorer
         3) Time
         4) "0.025000000000000001"
         5) Counter
         6) (integer) 1
      4) 1) Type
         2) Sorter
         3) Time
         4) "0.013999999999999999"
         5) Counter
         6) (integer) 1
      5) 1) Type
         2) Loader
         3) Time
         4) "0.10299999999999999"
         5) Counter
         6) (integer) 1
```

See also
--------

[`FT.SEARCH`](../ft.search) | [`FT.AGGREGATE`](../ft.aggregate)

Related topics
--------------

[RediSearch](https://redis.io/docs/stack/search)

redis ZINTER

ZINTER
======

```
ZINTER
```

Syntax

```
ZINTER numkeys key [key ...] [WEIGHTS weight [weight ...]] [AGGREGATE <SUM | MIN | MAX>] [WITHSCORES]
```

Available since: 6.2.0 Time complexity: O(N\*K)+O(M\*log(M)) worst case with N being the smallest input sorted set, K being the number of input sorted sets and M being the number of elements in the resulting sorted set.
ACL categories: `@read`, `@sortedset`, `@slow`, This command is similar to [`ZINTERSTORE`](../zinterstore), but instead of storing the resulting sorted set, it is returned to the client. For a description of the `WEIGHTS` and `AGGREGATE` options, see [`ZUNIONSTORE`](../zunionstore). Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): the result of intersection (optionally with their scores, in case the `WITHSCORES` option is given). Examples -------- ``` ZADD zset1 1 "one" ZADD zset1 2 "two" ZADD zset2 1 "one" ZADD zset2 2 "two" ZADD zset2 3 "three" ZINTER 2 zset1 zset2 ZINTER 2 zset1 zset2 WITHSCORES ``` redis WATCH WATCH ===== ``` WATCH ``` Syntax ``` WATCH key [key ...] ``` Available since: 2.2.0 Time complexity: O(1) for every key. ACL categories: `@fast`, `@transaction`, Marks the given keys to be watched for conditional execution of a [transaction](https://redis.io/topics/transactions). Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): always `OK`. redis ZSCORE ZSCORE ====== ``` ZSCORE ``` Syntax ``` ZSCORE key member ``` Available since: 1.2.0 Time complexity: O(1) ACL categories: `@read`, `@sortedset`, `@fast`, Returns the score of `member` in the sorted set at `key`. If `member` does not exist in the sorted set, or `key` does not exist, `nil` is returned. Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): the score of `member` (a double precision floating point number), represented as string. Examples -------- ``` ZADD myzset 1 "one" ZSCORE myzset "one" ``` redis XDEL XDEL ==== ``` XDEL ``` Syntax ``` XDEL key id [id ...] ``` Available since: 5.0.0 Time complexity: O(1) for each single item to delete in the stream, regardless of the stream size. ACL categories: `@write`, `@stream`, `@fast`, Removes the specified entries from a stream, and returns the number of entries deleted. 
This number may be less than the number of IDs passed to the command in the case where some of the specified IDs do not exist in the stream.

Normally you may think of a Redis stream as an append-only data structure, but Redis streams are represented in memory, so we are also able to delete entries. This may be useful, for instance, in order to comply with certain privacy policies.

Understanding the low level details of entries deletion
-------------------------------------------------------

Redis streams are represented in a way that makes them memory efficient: a radix tree is used to index macro-nodes that each pack tens of stream entries linearly. Normally what happens when you delete an entry from a stream is that the entry is not *really* evicted, it just gets marked as deleted. Eventually, if all the entries in a macro-node are marked as deleted, the whole node is destroyed and the memory reclaimed. This means that if you delete a large amount of entries from a stream, for instance more than 50% of the entries appended to the stream, the memory usage per entry may increase, since the stream will become fragmented. However, the stream performance will remain the same.

In future versions of Redis it is possible that we'll trigger a node garbage collection once a given macro-node reaches a given amount of deleted entries. Currently, with the usage we anticipate for this data structure, it is not a good idea to add such complexity.

Return
------

[Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of entries actually deleted.
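The mark-then-reclaim behavior described above can be illustrated with a small Python sketch. This is a simplified model for intuition only, not Redis's actual radix-tree/listpack implementation: deleting only flags an entry, and a node's memory becomes reclaimable only once every entry in it is flagged.

```python
# Simplified model of stream macro-node deletion (illustrative only;
# real Redis packs entries into listpack macro-nodes under a radix tree).
class MacroNode:
    def __init__(self, ids):
        # entry id -> deleted flag; deleting only marks, it does not evict
        self.entries = {entry_id: False for entry_id in ids}

    def mark_deleted(self, entry_id):
        if entry_id in self.entries and not self.entries[entry_id]:
            self.entries[entry_id] = True
            return 1  # XDEL counts entries actually deleted
        return 0

    def fully_deleted(self):
        # only when every entry is marked can the node's memory be freed
        return all(self.entries.values())

node = MacroNode(["1538561698944-0", "1538561700640-0", "1538561701744-0"])
print(node.mark_deleted("1538561700640-0"))  # 1
print(node.fully_deleted())  # False: the node's memory is not reclaimed yet
```

This is also why deleting, say, half the entries of a large stream can raise the per-entry memory cost: many nodes end up partially deleted but still allocated.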
Examples
--------

```
> XADD mystream * a 1
1538561698944-0
> XADD mystream * b 2
1538561700640-0
> XADD mystream * c 3
1538561701744-0
> XDEL mystream 1538561700640-0
(integer) 1
127.0.0.1:6379> XRANGE mystream - +
1) 1) 1538561698944-0
   2) 1) "a"
      2) "1"
2) 1) 1538561701744-0
   2) 1) "c"
      2) "3"
```

redis JSON.TOGGLE

JSON.TOGGLE
===========

```
JSON.TOGGLE
```

Syntax

```
JSON.TOGGLE key path
```

Available in: [Redis Stack](https://redis.io/docs/stack) / [JSON 2.0.0](https://redis.io/docs/stack/json) Time complexity: O(1) when path is evaluated to a single value, O(N) when path is evaluated to multiple values, where N is the size of the key

Toggle a Boolean value stored at `path`

[Examples](#examples)

Required arguments
------------------

`key` is the key to modify.

Optional arguments
------------------

`path` is the JSONPath to specify. Default is root `$`.

Return
------

JSON.TOGGLE returns an array of integer replies for each path, the new value (`0` if `false` or `1` if `true`), or `nil` for JSON values matching the path that are not Boolean. For more information about replies, see [Redis serialization protocol specification](https://redis.io/docs/reference/protocol-spec).

Examples
--------

**Toggle a Boolean value stored at `path`**

Create a JSON document.

```
127.0.0.1:6379> JSON.SET doc $ '{"bool": true}'
OK
```

Toggle the Boolean value.

```
127.0.0.1:6379> JSON.TOGGLE doc $.bool
1) (integer) 0
```

Get the updated document.

```
127.0.0.1:6379> JSON.GET doc $
"[{\"bool\":false}]"
```

Toggle the Boolean value.

```
127.0.0.1:6379> JSON.TOGGLE doc $.bool
1) (integer) 1
```

Get the updated document.

```
127.0.0.1:6379> JSON.GET doc $
"[{\"bool\":true}]"
```

See also
--------

[`JSON.SET`](../json.set) | [`JSON.GET`](../json.get)

Related topics
--------------

* [RedisJSON](https://redis.io/docs/stack/json)
* [Index and search JSON documents](https://redis.io/docs/stack/search/indexing_json)
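The JSON.TOGGLE reply rules (new value reported as `0`/`1`, `nil` for non-Boolean matches) can be sketched in plain Python. This is an illustrative model, not RedisJSON code; `None` stands in for a RESP `nil`, and the input list stands in for the set of values matched by a path:

```python
# Illustrative model of JSON.TOGGLE reply semantics (not RedisJSON code).
def json_toggle(values):
    """Flip each matched Boolean in place and report its new value as
    0/1; report None (RESP nil) for values that are not Boolean."""
    replies = []
    for i, value in enumerate(values):
        if isinstance(value, bool):
            values[i] = not value
            replies.append(int(values[i]))
        else:
            replies.append(None)
    return replies

matched = [True, "hello", False]  # values matched by some path
print(json_toggle(matched))  # [0, None, 1]
print(matched)               # [False, 'hello', True]
```

Note that a wide path such as `$..*` can match Booleans and non-Booleans alike, which is why the reply is an array with one slot per match.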
redis GEOSEARCHSTORE GEOSEARCHSTORE ============== ``` GEOSEARCHSTORE ``` Syntax ``` GEOSEARCHSTORE destination source <FROMMEMBER member | FROMLONLAT longitude latitude> <BYRADIUS radius <M | KM | FT | MI> | BYBOX width height <M | KM | FT | MI>> [ASC | DESC] [COUNT count [ANY]] [STOREDIST] ``` Available since: 6.2.0 Time complexity: O(N+log(M)) where N is the number of elements in the grid-aligned bounding box area around the shape provided as the filter and M is the number of items inside the shape ACL categories: `@write`, `@geo`, `@slow`, This command is like [`GEOSEARCH`](../geosearch), but stores the result in destination key. This command replaces the now deprecated [`GEORADIUS`](../georadius) and [`GEORADIUSBYMEMBER`](../georadiusbymember). By default, it stores the results in the `destination` sorted set with their geospatial information. When using the `STOREDIST` option, the command stores the items in a sorted set populated with their distance from the center of the circle or box, as a floating-point number, in the same unit specified for that shape. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of elements in the resulting set. Examples -------- ``` GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania" GEOADD Sicily 12.758489 38.788135 "edge1" 17.241510 38.788135 "edge2" GEOSEARCHSTORE key1 Sicily FROMLONLAT 15 37 BYBOX 400 400 km ASC COUNT 3 GEOSEARCH key1 FROMLONLAT 15 37 BYBOX 400 400 km ASC WITHCOORD WITHDIST WITHHASH GEOSEARCHSTORE key2 Sicily FROMLONLAT 15 37 BYBOX 400 400 km ASC COUNT 3 STOREDIST ZRANGE key2 0 -1 WITHSCORES ``` History ------- * Starting with Redis version 7.0.0: Added support for uppercase unit names. redis ZPOPMAX ZPOPMAX ======= ``` ZPOPMAX ``` Syntax ``` ZPOPMAX key [count] ``` Available since: 5.0.0 Time complexity: O(log(N)\*M) with N being the number of elements in the sorted set, and M being the number of elements popped. 
ACL categories: `@write`, `@sortedset`, `@fast`,

Removes and returns up to `count` members with the highest scores in the sorted set stored at `key`.

When left unspecified, the default value for `count` is 1. Specifying a `count` value that is higher than the sorted set's cardinality will not produce an error. When returning multiple elements, the one with the highest score will be the first, followed by the elements with lower scores.

Return
------

[Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list of popped elements and scores.

Examples
--------

```
ZADD myzset 1 "one"
ZADD myzset 2 "two"
ZADD myzset 3 "three"
ZPOPMAX myzset
```

redis TOPK.RESERVE

TOPK.RESERVE
============

```
TOPK.RESERVE
```

Syntax

```
TOPK.RESERVE key topk [width depth decay]
```

Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(1)

Initializes a TopK with specified parameters.

### Parameters

* **key**: Key under which the sketch is to be found.
* **topk**: Number of top occurring items to keep.

Optional parameters

* **width**: Number of counters kept in each array. (Default 8)
* **depth**: Number of arrays. (Default 7)
* **decay**: The probability of reducing a counter in an occupied bucket. It is raised to the power of its counter (decay ^ bucket[i].counter). Therefore, as the counter gets higher, the chance of a reduction decreases. (Default 0.9)

Return
------

[Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) - `OK` if executed correctly, or [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors) otherwise.
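The decay rule above, where the probability of decrementing an occupied bucket is `decay` raised to the power of that bucket's counter, can be sketched as follows. This is an illustrative HeavyKeeper-style model, not the module's implementation:

```python
import random

# Illustrative decay step: an occupied bucket's counter is decremented
# with probability decay ** counter, so high counters (likely true heavy
# hitters) are exponentially harder to displace than low ones.
def maybe_decay(counter, decay=0.9, rng=random.random):
    if counter > 0 and rng() < decay ** counter:
        return counter - 1
    return counter

# With the default decay of 0.9, a bucket at counter 1 is decremented
# with probability 0.9, but at counter 20 only with 0.9**20, about 0.12.
print(maybe_decay(5, rng=lambda: 0.0))  # 4: the decay attempt succeeded
print(maybe_decay(5, rng=lambda: 1.0))  # 5: the decay attempt failed
```

A smaller `decay` value therefore protects established counters more aggressively, at the cost of making the sketch slower to adapt to a changing item distribution.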
Examples -------- ``` redis> TOPK.RESERVE topk 50 2000 7 0.925 OK ``` redis TDIGEST.MIN TDIGEST.MIN =========== ``` TDIGEST.MIN ``` Syntax ``` TDIGEST.MIN key ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.4.0](https://redis.io/docs/stack/bloom) Time complexity: O(1) Returns the minimum observation value from a t-digest sketch. Required arguments ------------------ `key` is key name for an existing t-digest sketch. Return value ------------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) of minimum observation value from a sketch. The result is always accurate. 'nan' if the sketch is empty. Examples -------- ``` redis> TDIGEST.CREATE t OK redis> TDIGEST.MIN t "nan" redis> TDIGEST.ADD t 3 4 1 2 5 OK redis> TDIGEST.MIN t "1" ``` redis KEYS KEYS ==== ``` KEYS ``` Syntax ``` KEYS pattern ``` Available since: 1.0.0 Time complexity: O(N) with N being the number of keys in the database, under the assumption that the key names in the database and the given pattern have limited length. ACL categories: `@keyspace`, `@read`, `@slow`, `@dangerous`, Returns all keys matching `pattern`. While the time complexity for this operation is O(N), the constant times are fairly low. For example, Redis running on an entry level laptop can scan a 1 million key database in 40 milliseconds. **Warning**: consider `KEYS` as a command that should only be used in production environments with extreme care. It may ruin performance when it is executed against large databases. This command is intended for debugging and special operations, such as changing your keyspace layout. Don't use `KEYS` in your regular application code. If you're looking for a way to find keys in a subset of your keyspace, consider using [`SCAN`](../scan) or [sets](https://redis.io/topics/data-types#sets). 
Supported glob-style patterns:

* `h?llo` matches `hello`, `hallo` and `hxllo`
* `h*llo` matches `hllo` and `heeeello`
* `h[ae]llo` matches `hello` and `hallo`, but not `hillo`
* `h[^e]llo` matches `hallo`, `hbllo`, ... but not `hello`
* `h[a-b]llo` matches `hallo` and `hbllo`

Use `\` to escape special characters if you want to match them verbatim.

Return
------

[Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list of keys matching `pattern`.

Examples
--------

```
MSET firstname Jack lastname Stuntman age 35
KEYS *name*
KEYS a??
KEYS *
```

redis COPY

COPY
====

```
COPY
```

Syntax

```
COPY source destination [DB destination-db] [REPLACE]
```

Available since: 6.2.0 Time complexity: O(N) worst case for collections, where N is the number of nested items. O(1) for string values. ACL categories: `@keyspace`, `@write`, `@slow`,

This command copies the value stored at the `source` key to the `destination` key.

By default, the `destination` key is created in the logical database used by the connection. The `DB` option allows specifying an alternative logical database index for the destination key.

The command returns an error when the `destination` key already exists. The `REPLACE` option removes the `destination` key before copying the value to it.

Return
------

[Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), specifically:

* `1` if `source` was copied.
* `0` if `source` was not copied.

Examples
--------

```
SET dolly "sheep"
COPY dolly clone
GET clone
```

redis BLMPOP

BLMPOP
======

```
BLMPOP
```

Syntax

```
BLMPOP timeout numkeys key [key ...] <LEFT | RIGHT> [COUNT count]
```

Available since: 7.0.0 Time complexity: O(N+M) where N is the number of provided keys and M is the number of elements returned. ACL categories: `@write`, `@list`, `@slow`, `@blocking`,

`BLMPOP` is the blocking variant of [`LMPOP`](../lmpop).

When any of the lists contains elements, this command behaves exactly like [`LMPOP`](../lmpop).
When used inside a [`MULTI`](../multi)/[`EXEC`](../exec) block, this command behaves exactly like [`LMPOP`](../lmpop).

When all lists are empty, Redis will block the connection until another client pushes an element to one of the lists, or until the `timeout` (a double value specifying the maximum number of seconds to block) elapses. A `timeout` of zero can be used to block indefinitely.

See [`LMPOP`](../lmpop) for more information.

Return
------

[Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): specifically:

* A `nil` when no element could be popped, and timeout is reached.
* A two-element array with the first element being the name of the key from which elements were popped, and the second element is an array of elements.

redis ZMPOP

ZMPOP
=====

```
ZMPOP
```

Syntax

```
ZMPOP numkeys key [key ...] <MIN | MAX> [COUNT count]
```

Available since: 7.0.0 Time complexity: O(K) + O(M\*log(N)) where K is the number of provided keys, N being the number of elements in the sorted set, and M being the number of elements popped. ACL categories: `@write`, `@sortedset`, `@slow`,

Pops one or more elements (member-score pairs) from the first non-empty sorted set in the provided list of key names.

`ZMPOP` and [`BZMPOP`](../bzmpop) are similar to the following, more limited, commands:

* [`ZPOPMIN`](../zpopmin) or [`ZPOPMAX`](../zpopmax) which take only one key, and can return multiple elements.
* [`BZPOPMIN`](../bzpopmin) or [`BZPOPMAX`](../bzpopmax) which take multiple keys, but return only one element from just one key.

See [`BZMPOP`](../bzmpop) for the blocking variant of this command.

When the `MIN` modifier is used, the elements popped are those with the lowest scores from the first non-empty sorted set. The `MAX` modifier causes elements with the highest scores to be popped. The optional `COUNT` can be used to specify the number of elements to pop, and is set to 1 by default.
The number of popped elements is the minimum from the sorted set's cardinality and `COUNT`'s value. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): specifically: * A `nil` when no element could be popped. * A two-element array with the first element being the name of the key from which elements were popped, and the second element is an array of the popped elements. Every entry in the elements array is also an array that contains the member and its score. Examples -------- ``` ZMPOP 1 notsuchkey MIN ZADD myzset 1 "one" 2 "two" 3 "three" ZMPOP 1 myzset MIN ZRANGE myzset 0 -1 WITHSCORES ZMPOP 1 myzset MAX COUNT 10 ZADD myzset2 4 "four" 5 "five" 6 "six" ZMPOP 2 myzset myzset2 MIN COUNT 10 ZRANGE myzset 0 -1 WITHSCORES ZMPOP 2 myzset myzset2 MAX COUNT 10 ZRANGE myzset2 0 -1 WITHSCORES EXISTS myzset myzset2 ``` redis CLIENT CLIENT ====== ``` CLIENT ID ``` Syntax ``` CLIENT ID ``` Available since: 5.0.0 Time complexity: O(1) ACL categories: `@slow`, `@connection`, The command just returns the ID of the current connection. Every connection ID has certain guarantees: 1. It is never repeated, so if `CLIENT ID` returns the same number, the caller can be sure that the underlying client did not disconnect and reconnect the connection, but it is still the same connection. 2. The ID is monotonically incremental. If the ID of a connection is greater than the ID of another connection, it is guaranteed that the second connection was established with the server at a later time. This command is especially useful together with [`CLIENT UNBLOCK`](../client-unblock) which was introduced also in Redis 5 together with `CLIENT ID`. Check the [`CLIENT UNBLOCK`](../client-unblock) command page for a pattern involving the two commands. Examples -------- ``` CLIENT ID ``` Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) The id of the client. 
redis ZMSCORE ZMSCORE ======= ``` ZMSCORE ``` Syntax ``` ZMSCORE key member [member ...] ``` Available since: 6.2.0 Time complexity: O(N) where N is the number of members being requested. ACL categories: `@read`, `@sortedset`, `@fast`, Returns the scores associated with the specified `members` in the sorted set stored at `key`. For every `member` that does not exist in the sorted set, a `nil` value is returned. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list of scores or `nil` associated with the specified `member` values (a double precision floating point number), represented as strings. Examples -------- ``` ZADD myzset 1 "one" ZADD myzset 2 "two" ZMSCORE myzset "one" "two" "nofield" ``` redis FT.AGGREGATE FT.AGGREGATE ============ ``` FT.AGGREGATE ``` Syntax ``` FT.AGGREGATE index query [VERBATIM] [ LOAD count field [field ...]] [TIMEOUT timeout] [LOAD *] [ GROUPBY nargs property [property ...] [ REDUCE function nargs arg [arg ...] [AS name] [ REDUCE function nargs arg [arg ...] [AS name] ...]] [ GROUPBY nargs property [property ...] [ REDUCE function nargs arg [arg ...] [AS name] [ REDUCE function nargs arg [arg ...] [AS name] ...]] ...]] [ SORTBY nargs [ property ASC | DESC [ property ASC | DESC ...]] [MAX num]] [ APPLY expression AS name [ APPLY expression AS name ...]] [ LIMIT offset num] [FILTER filter] [ WITHCURSOR [COUNT read_size] [MAXIDLE idle_time]] [ PARAMS nargs name value [ name value ...]] [DIALECT dialect] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.1.0](https://redis.io/docs/stack/search) Time complexity: O(1) Run a search query on an index, and perform aggregate transformations on the results, extracting statistics etc from them [Examples](#examples) Required arguments ------------------ `index` is index name against which the query is executed. You must first create the index using [`FT.CREATE`](../ft.create). `query` is base filtering query that retrieves the documents. 
It follows the exact same syntax as the search query, including filters, unions, not, optional, and so on. Optional arguments ------------------ `VERBATIM` if set, does not try to use stemming for query expansion but searches the query terms verbatim. `LOAD {nargs} {identifier} AS {property} …` loads document attributes from the source document. * `identifier` is either an attribute name for hashes and JSON or a JSON Path expression for JSON. * `property` is the optional name used in the result. If it is not provided, the `identifier` is used. This should be avoided. * If `*` is used as `nargs`, all attributes in a document are loaded. Attributes needed for aggregations should be stored as `SORTABLE`, where they are available to the aggregation pipeline with very low latency. `LOAD` hurts the performance of aggregate queries considerably because every processed record needs to execute the equivalent of [`HMGET`](../hmget) against a Redis key, which when executed over millions of keys, amounts to high processing times. `GROUPBY {nargs} {property}` groups the results in the pipeline based on one or more properties. Each group should have at least one *reducer*, a function that handles the group entries, either counting them, or performing multiple aggregate operations (see below). `REDUCE {func} {nargs} {arg} … [AS {name}]` reduces the matching results in each group into a single record, using a reduction function. For example, `COUNT` counts the number of records in the group. The reducers can have their own property names using the `AS {name}` optional argument. If a name is not given, the resulting name will be the name of the reduce function and the group properties. For example, if a name is not given to `COUNT_DISTINCT` by property `@foo`, the resulting name will be `count_distinct(@foo)`. See [Supported GROUPBY reducers](https://redis.io/docs/stack/search/reference/aggregations/#supported-groupby-reducers) for more details. 
`SORTBY {nargs} {property} {ASC|DESC} [MAX {num}]` sorts the pipeline up until the point of `SORTBY`, using a list of properties.

* By default, sorting is ascending, but `ASC` or `DESC` can be added for each property.
* `nargs` is the number of sorting parameters, including `ASC` and `DESC`, for example, `SORTBY 4 @foo ASC @bar DESC`.
* `MAX` is used to optimize sorting, by sorting only for the n-largest elements. Although it is not connected to `LIMIT`, you usually need just `SORTBY … MAX` for common queries.

Attributes needed for `SORTBY` should be stored as `SORTABLE` to be available with very low latency.

`APPLY {expr} AS {name}` applies a 1-to-1 transformation on one or more properties and either stores the result as a new property down the pipeline or replaces any property using this transformation.

`expr` is an expression that can be used to perform arithmetic operations on numeric properties, or functions that can be applied on properties depending on their types (see below), or any combination thereof. For example, `APPLY "sqrt(@foo)/log(@bar) + 5" AS baz` evaluates this expression dynamically for each record in the pipeline and stores the result as a new property called `baz`, which can be referenced by further `APPLY`/`SORTBY`/`GROUPBY`/`REDUCE` operations down the pipeline.

`LIMIT {offset} {num}` limits the number of results to return just `num` results starting at index `offset` (zero-based). It is much more efficient to use `SORTBY … MAX` if you are interested in just limiting the output of a sort operation. If a key expires during the query, an attempt to `load` the key's value will return a null array.

However, `LIMIT` can be used to limit results without sorting, or for paging the n-largest results as determined by `SORTBY MAX`. For example, getting results 50-100 of the top 100 results is most efficiently expressed as `SORTBY 1 @foo MAX 100 LIMIT 50 50`.
Removing the `MAX` from `SORTBY` results in the pipeline sorting *all* the records and then paging over results 50-100. `FILTER {expr}` filters the results using predicate expressions relating to values in each result. They are applied post query and relate to the current state of the pipeline. `WITHCURSOR {COUNT} {read_size} [MAXIDLE {idle_time}]` Scan part of the results with a quicker alternative than `LIMIT`. See [Cursor API](https://redis.io/docs/stack/search/reference/aggregations/#cursor-api) for more details. `TIMEOUT {milliseconds}` if set, overrides the timeout parameter of the module. `PARAMS {nargs} {name} {value}` defines one or more value parameters. Each parameter has a name and a value. You can reference parameters in the `query` by a `$`, followed by the parameter name, for example, `$user`. Each such reference in the search query to a parameter name is substituted by the corresponding parameter value. For example, with parameter definition `PARAMS 4 lon 29.69465 lat 34.95126`, the expression `@loc:[$lon $lat 10 km]` is evaluated to `@loc:[29.69465 34.95126 10 km]`. You cannot reference parameters in the query string where concrete values are not allowed, such as in field names, for example, `@loc`. To use `PARAMS`, set `DIALECT` to `2` or greater than `2`. `DIALECT {dialect_version}` selects the dialect version under which to execute the query. If not specified, the query will execute under the default dialect version set during module initial loading or via [`FT.CONFIG SET`](../ft.config-set) command. Return ------ FT.AGGREGATE returns an array reply where each row is an array reply and represents a single aggregate result. The [integer reply](https://redis.io/docs/reference/protocol-spec/#resp-integers) at position `1` does not represent a valid value. 
### Return multiple values

See [Return multiple values](../ft.search#return-multiple-values) in [`FT.SEARCH`](../ft.search)

The `DIALECT` can be specified as a parameter in the FT.AGGREGATE command. If it is not specified, the `DEFAULT_DIALECT` is used, which can be set using [`FT.CONFIG SET`](../ft.config-set) or by passing it as an argument to the `redisearch` module when it is loaded.

For example, with the following document and index:

```
127.0.0.1:6379> JSON.SET doc:1 $ '[{"arr": [1, 2, 3]}, {"val": "hello"}, {"val": "world"}]'
OK
127.0.0.1:6379> FT.CREATE idx ON JSON PREFIX 1 doc: SCHEMA $..arr AS arr NUMERIC $..val AS val TEXT
OK
```

Notice the different replies, with and without `DIALECT 3`:

```
127.0.0.1:6379> FT.AGGREGATE idx * LOAD 2 arr val
1) (integer) 1
2) 1) "arr"
   2) "[1,2,3]"
   3) "val"
   4) "hello"
127.0.0.1:6379> FT.AGGREGATE idx * LOAD 2 arr val DIALECT 3
1) (integer) 1
2) 1) "arr"
   2) "[[1,2,3]]"
   3) "val"
   4) "[\"hello\",\"world\"]"
```

Complexity
----------

Non-deterministic. Depends on the query and aggregations performed, but it is usually linear to the number of results returned.

Examples
--------

**Sort page visits by day**

Find visits to the page `about.html`, group them by the day of the visit, count the number of visits, and sort them by day.

```
FT.AGGREGATE idx "@url:\"about.html\"" APPLY "day(@timestamp)" AS day GROUPBY 2 @day @country REDUCE count 0 AS num_visits SORTBY 4 @day
```

**Find most books ever published**

Find the most books ever published in a single year.

```
FT.AGGREGATE books-idx * GROUPBY 1 @published_year REDUCE COUNT 0 AS num_published GROUPBY 0 REDUCE MAX 1 @num_published AS max_books_published_per_year
```

**Reduce all results**

The last example used `GROUPBY 0`. Use `GROUPBY 0` to apply a `REDUCE` function over all results from the last step of an aggregation pipeline -- this works on both the initial query and subsequent `GROUPBY` operations.
Search for libraries within 10 kilometers of the longitude -73.982254 and latitude 40.753181, then annotate them with the distance between their location and those coordinates.

```
FT.AGGREGATE libraries-idx "@location:[-73.982254 40.753181 10 km]" LOAD 1 @location APPLY "geodistance(@location, -73.982254, 40.753181)"
```

Here, we needed to use `LOAD` to pre-load the `@location` attribute because it is a GEO attribute.

Next, count GitHub events by user (actor) to produce the most active users.

```
127.0.0.1:6379> FT.AGGREGATE gh "*" GROUPBY 1 @actor REDUCE COUNT 0 AS num SORTBY 2 @num DESC MAX 10
 1) (integer) 284784
 2) 1) "actor"
    2) "lombiqbot"
    3) "num"
    4) "22197"
 3) 1) "actor"
    2) "codepipeline-test"
    3) "num"
    4) "17746"
 4) 1) "actor"
    2) "direwolf-github"
    3) "num"
    4) "10683"
 5) 1) "actor"
    2) "ogate"
    3) "num"
    4) "6449"
 6) 1) "actor"
    2) "openlocalizationtest"
    3) "num"
    4) "4759"
 7) 1) "actor"
    2) "digimatic"
    3) "num"
    4) "3809"
 8) 1) "actor"
    2) "gugod"
    3) "num"
    4) "3512"
 9) 1) "actor"
    2) "xdzou"
    3) "num"
    4) "3216"
10) 1) "actor"
    2) "opstest"
    3) "num"
    4) "2863"
11) 1) "actor"
    2) "jikker"
    3) "num"
    4) "2794"
(0.59s)
```

See also
--------

[`FT.CONFIG SET`](../ft.config-set) | [`FT.SEARCH`](../ft.search)

Related topics
--------------

* [Aggregations](https://redis.io/docs/stack/search/reference/aggregations)
* [RediSearch](https://redis.io/docs/stack/search)
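The nargs bookkeeping in these pipelines (every `GROUPBY`, `REDUCE`, and `SORTBY` clause is prefixed with a count of the arguments that follow) is easy to get wrong by hand. The following is a hypothetical helper, not part of any client library, that assembles such an argument list to be sent through whatever Redis client you use:

```python
# Hypothetical FT.AGGREGATE argument-list builder (illustrative only).
# Counts like "GROUPBY nargs" must match the number of items that follow.
def aggregate_args(index, query, groupby=(), reduces=(), sortby=()):
    args = ["FT.AGGREGATE", index, query]
    if groupby:
        args += ["GROUPBY", str(len(groupby)), *groupby]
    for func, fargs, alias in reduces:  # reducers attach to the GROUPBY
        args += ["REDUCE", func, str(len(fargs)), *fargs, "AS", alias]
    if sortby:  # nargs counts properties AND their ASC/DESC modifiers
        args += ["SORTBY", str(len(sortby)), *sortby]
    return args

print(aggregate_args(
    "idx", '@url:"about.html"',
    groupby=("@day", "@country"),
    reduces=[("COUNT", (), "num_visits")],
    sortby=("@day", "ASC"),
))
```

The resulting list maps one-to-one onto the positional arguments shown in the examples above, with `SORTBY 2 @day ASC` counting both the property and its direction modifier.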
redis BF.EXISTS BF.EXISTS ========= ``` BF.EXISTS ``` Syntax ``` BF.EXISTS key item ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 1.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(k), where k is the number of hash functions used by the last sub-filter Determines whether an item may exist in the Bloom Filter or not. ### Parameters * **key**: The name of the filter * **item**: The item to check for Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) - where "1" value means the item may exist in the filter, and a "0" value means it does not exist in the filter. Examples -------- ``` redis> BF.EXISTS bf item1 (integer) 1 redis> BF.EXISTS bf item_new (integer) 0 ``` redis RENAME RENAME ====== ``` RENAME ``` Syntax ``` RENAME key newkey ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@keyspace`, `@write`, `@slow`, Renames `key` to `newkey`. It returns an error when `key` does not exist. If `newkey` already exists it is overwritten, when this happens `RENAME` executes an implicit [`DEL`](../del) operation, so if the deleted key contains a very big value it may cause high latency even if `RENAME` itself is usually a constant-time operation. In Cluster mode, both `key` and `newkey` must be in the same **hash slot**, meaning that in practice only keys that have the same hash tag can be reliably renamed in cluster. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) Examples -------- ``` SET mykey "Hello" RENAME mykey myotherkey GET myotherkey ``` Behavior change history ----------------------- * `>= 3.2.0`: The command no longer returns an error when source and destination names are the same. 
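The cluster restriction mentioned for `RENAME`, that `key` and `newkey` must live in the same hash slot, comes down to hash tags. A simplified Python sketch of the hash-tag extraction rule from the Redis Cluster specification (omitting the final CRC16-mod-16384 step that maps the tag to a slot):

```python
# Simplified hash-tag extraction per the Redis Cluster spec: only the
# substring between the first '{' and the first '}' after it is hashed,
# and only if that substring is non-empty.
def hash_tag(key):
    start = key.find("{")
    if start == -1:
        return key
    end = key.find("}", start + 1)
    if end == -1 or end == start + 1:  # no '}' or empty tag: hash whole key
        return key
    return key[start + 1:end]

print(hash_tag("{user1}.old") == hash_tag("{user1}.new"))  # True: same slot
print(hash_tag("foo{}{bar}"))  # foo{}{bar} (first tag empty, whole key hashed)
```

So naming key pairs with a shared tag, such as `{user1}.old` and `{user1}.new`, is what makes them reliably renameable in cluster mode.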
redis SCRIPT SCRIPT ====== ``` SCRIPT FLUSH ``` Syntax ``` SCRIPT FLUSH [ASYNC | SYNC] ``` Available since: 2.6.0 Time complexity: O(N) with N being the number of scripts in cache ACL categories: `@slow`, `@scripting`, Flush the Lua scripts cache. By default, `SCRIPT FLUSH` will synchronously flush the cache. Starting with Redis 6.2, setting the **lazyfree-lazy-user-flush** configuration directive to "yes" changes the default flush mode to asynchronous. It is possible to use one of the following modifiers to dictate the flushing mode explicitly: * `ASYNC`: flushes the cache asynchronously * `SYNC`: flushes the cache synchronously For more information about [`EVAL`](../eval) scripts please refer to [Introduction to Eval Scripts](https://redis.io/topics/eval-intro). Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) Behavior change history ----------------------- * `>= 6.2.0`: Default flush behavior now configurable by the **lazyfree-lazy-user-flush** configuration directive. History ------- * Starting with Redis version 6.2.0: Added the `ASYNC` and `SYNC` flushing mode modifiers. redis EXPIRE EXPIRE ====== ``` EXPIRE ``` Syntax ``` EXPIRE key seconds [NX | XX | GT | LT] ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@keyspace`, `@write`, `@fast`, Set a timeout on `key`. After the timeout has expired, the key will automatically be deleted. A key with an associated timeout is often said to be *volatile* in Redis terminology. The timeout will only be cleared by commands that delete or overwrite the contents of the key, including [`DEL`](../del), [`SET`](../set), [`GETSET`](../getset) and all the `*STORE` commands. This means that all the operations that conceptually *alter* the value stored at the key without replacing it with a new one will leave the timeout untouched. 
For instance, incrementing the value of a key with [`INCR`](../incr), pushing a new value into a list with [`LPUSH`](../lpush), or altering the field value of a hash with [`HSET`](../hset) are all operations that will leave the timeout untouched. The timeout can also be cleared, turning the key back into a persistent key, using the [`PERSIST`](../persist) command. If a key is renamed with [`RENAME`](../rename), the associated time to live is transferred to the new key name. If a key is overwritten by [`RENAME`](../rename), like in the case of an existing key `Key_A` that is overwritten by a call like `RENAME Key_B Key_A`, it does not matter if the original `Key_A` had a timeout associated or not, the new key `Key_A` will inherit all the characteristics of `Key_B`. Note that calling `EXPIRE`/[`PEXPIRE`](../pexpire) with a non-positive timeout or [`EXPIREAT`](../expireat)/[`PEXPIREAT`](../pexpireat) with a time in the past will result in the key being [deleted](../del) rather than expired (accordingly, the emitted [key event](https://redis.io/topics/notifications) will be `del`, not `expired`). Options ------- The `EXPIRE` command supports a set of options: * `NX` -- Set expiry only when the key has no expiry * `XX` -- Set expiry only when the key has an existing expiry * `GT` -- Set expiry only when the new expiry is greater than current one * `LT` -- Set expiry only when the new expiry is less than current one A non-volatile key is treated as an infinite TTL for the purpose of `GT` and `LT`. The `GT`, `LT` and `NX` options are mutually exclusive. Refreshing expires ------------------ It is possible to call `EXPIRE` using as argument a key that already has an existing expire set. In this case the time to live of a key is *updated* to the new value. There are many useful applications for this, an example is documented in the *Navigation session* pattern section below. 
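The `NX`/`XX`/`GT`/`LT` checks described above reduce to a predicate over the key's current TTL. A small Python sketch of that decision logic — a hypothetical helper for illustration, treating a persistent key as having an infinite TTL for `GT` and `LT`, per the note above:

```python
import math

def should_set_expiry(current_ttl, new_ttl, flag=None):
    """Decide whether EXPIRE applies, given the key's current TTL in seconds.

    current_ttl is None for a persistent (non-volatile) key; for GT/LT a
    persistent key is treated as having an infinite TTL.
    """
    if flag is None:
        return True
    if flag == "NX":
        return current_ttl is None
    if flag == "XX":
        return current_ttl is not None
    effective = math.inf if current_ttl is None else current_ttl
    if flag == "GT":
        return new_ttl > effective
    if flag == "LT":
        return new_ttl < effective
    raise ValueError(f"unknown flag {flag!r}")

# Mirrors the Examples section: XX on a persistent key is refused, NX succeeds.
print(should_set_expiry(None, 10, "XX"))  # False
print(should_set_expiry(None, 10, "NX"))  # True
print(should_set_expiry(None, 10, "GT"))  # False (persistent = infinite TTL)
```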
Differences in Redis prior to 2.1.3 ----------------------------------- In Redis versions prior to **2.1.3**, altering a key with an expire set using a command altering its value had the effect of removing the key entirely. These semantics were needed because of limitations in the replication layer that are now fixed. `EXPIRE` would also return 0 and not alter the timeout for a key with a timeout set. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), specifically: * `1` if the timeout was set. * `0` if the timeout was not set, e.g. the key doesn't exist, or the operation was skipped due to the provided arguments. Examples -------- ``` SET mykey "Hello" EXPIRE mykey 10 TTL mykey SET mykey "Hello World" TTL mykey EXPIRE mykey 10 XX TTL mykey EXPIRE mykey 10 NX TTL mykey ``` Pattern: Navigation session --------------------------- Imagine you have a web service and you are interested in the latest N pages *recently* visited by your users, such that each adjacent page view was not performed more than 60 seconds after the previous. Conceptually you may consider this set of page views as a *Navigation session* of your user, which may contain interesting information about what kind of products he or she is currently looking for, so that you can recommend related products. You can easily model this pattern in Redis using the following strategy: every time the user does a page view you call the following commands: ``` MULTI RPUSH pageviews.user:<userid> http://..... EXPIRE pageviews.user:<userid> 60 EXEC ``` If the user is idle for more than 60 seconds, the key will be deleted and only subsequent page views that have less than 60 seconds of difference will be recorded. This pattern is easily modified to use counters with [`INCR`](../incr) instead of lists with [`RPUSH`](../rpush). Appendix: Redis expires ======================= Keys with an expire ------------------- Normally Redis keys are created without an associated time to live. 
The key will simply live forever, unless it is removed by the user in an explicit way, for instance using the [`DEL`](../del) command. The `EXPIRE` family of commands is able to associate an expire to a given key, at the cost of some additional memory used by the key. When a key has an expire set, Redis will make sure to remove the key when the specified amount of time has elapsed. The key time to live can be updated or entirely removed using the `EXPIRE` and [`PERSIST`](../persist) commands (or other strictly related commands). Expire accuracy --------------- In Redis 2.4 the expire might not be pin-point accurate, and it could be between zero and one second out. Since Redis 2.6 the expire error is from 0 to 1 milliseconds. Expires and persistence ----------------------- Key expiry information is stored as absolute Unix timestamps (in milliseconds in case of Redis version 2.6 or greater). This means that the time is flowing even when the Redis instance is not active. For expires to work well, the computer time must be stable. If you move an RDB file between two computers with a big desync in their clocks, funny things may happen (like all the keys being expired at loading time). Even running instances will always check the computer clock, so for instance if you set a key with a time to live of 1000 seconds, and then set your computer time 2000 seconds in the future, the key will be expired immediately, instead of lasting for 1000 seconds. How Redis expires keys ---------------------- Redis keys are expired in two ways: a passive way, and an active way. A key is passively expired simply when some client tries to access it, and the key is found to be timed out. Of course this is not enough as there are expired keys that will never be accessed again. These keys should be expired anyway, so periodically Redis tests a few keys at random among keys with an expire set. All the keys that are already expired are deleted from the keyspace. 
Specifically this is what Redis does 10 times per second: 1. Test 20 random keys from the set of keys with an associated expire. 2. Delete all the keys found expired. 3. If more than 25% of the sampled keys were expired, start again from step 1. This is a trivial probabilistic algorithm: the assumption is that our sample is representative of the whole key space, and we continue to expire until the percentage of keys that are likely to be expired is under 25%. This means that at any given moment, the maximum number of already-expired keys still using memory is equal to the maximum number of write operations per second divided by four. How expires are handled in the replication link and AOF file ------------------------------------------------------------ In order to obtain a correct behavior without sacrificing consistency, when a key expires, a [`DEL`](../del) operation is synthesized in the AOF file and propagated to all the attached replica nodes. This way the expiration process is centralized in the master instance, and there is no chance of consistency errors. However while the replicas connected to a master will not expire keys independently (but will wait for the [`DEL`](../del) coming from the master), they'll still keep the full state of the expires existing in the dataset, so when a replica is elected to master it will be able to expire the keys independently, fully acting as a master. History ------- * Starting with Redis version 7.0.0: Added options: `NX`, `XX`, `GT` and `LT`. 
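The three-step sampling loop above can be sketched as a toy simulation over an in-memory table of expiry timestamps — a simplified model for intuition, not the server's actual code:

```python
import random
import time

def active_expire_cycle(expires: dict, now: float, sample_size: int = 20) -> None:
    """Toy model of one active-expiry cycle (Redis runs this ~10x per second).

    `expires` maps key -> absolute expiry timestamp; expired keys are removed
    in place. The loop repeats while more than 25% of the sample was expired.
    """
    while True:
        volatile = list(expires)
        if not volatile:
            return
        sample = random.sample(volatile, min(sample_size, len(volatile)))
        expired = [k for k in sample if expires[k] <= now]
        for k in expired:
            del expires[k]
        # Stop once 25% or less of the sample was expired (step 3 above).
        if len(expired) * 4 <= len(sample):
            return

now = time.time()
expires = {f"key:{i}": now - 1 for i in range(50)}  # all already expired
active_expire_cycle(expires, now)
print(len(expires))  # 0 — every sample was 100% expired, so the loop drains them
```

With a mixed keyspace the loop typically stops early, leaving at most a small fraction of expired keys behind until the next cycle, which is exactly the 25% bound described above.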
redis JSON.OBJKEYS JSON.OBJKEYS ============ ``` JSON.OBJKEYS ``` Syntax ``` JSON.OBJKEYS key [path] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [JSON 1.0.0](https://redis.io/docs/stack/json) Time complexity: O(N) when path is evaluated to a single value, where N is the number of keys in the object, O(N) when path is evaluated to multiple values, where N is the size of the key Return the keys in the object referenced by `path` [Examples](#examples) Required arguments ------------------ `key` is the key to parse. Returns `null` for nonexistent keys. Optional arguments ------------------ `path` is the JSONPath to specify. Default is root `$`. Returns `null` for a nonexistent path. Return ------ JSON.OBJKEYS returns an array of array replies for each path: an array of the key names in the object, each a bulk string reply, or `nil` if the matching JSON value is not an object. For more information about replies, see [Redis serialization protocol specification](https://redis.io/docs/reference/protocol-spec). Examples -------- ``` 127.0.0.1:6379> JSON.SET doc $ '{"a":[3], "nested": {"a": {"b":2, "c": 1}}}' OK 127.0.0.1:6379> JSON.OBJKEYS doc $..a 1) (nil) 2) 1) "b" 2) "c" ``` See also -------- [`JSON.ARRINDEX`](../json.arrindex) | [`JSON.ARRINSERT`](../json.arrinsert) Related topics -------------- * [RedisJSON](https://redis.io/docs/stack/json) * [Index and search JSON documents](https://redis.io/docs/stack/search/indexing_json) redis MEMORY MEMORY ====== ``` MEMORY MALLOC-STATS ``` Syntax ``` MEMORY MALLOC-STATS ``` Available since: 4.0.0 Time complexity: Depends on how much memory is allocated, could be slow ACL categories: `@slow`, The `MEMORY MALLOC-STATS` command provides an internal statistics report from the memory allocator. This command is currently implemented only when using **jemalloc** as an allocator, and evaluates to a benign NOOP for all others. 
Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): the memory allocator's internal statistics report redis HELLO HELLO ===== ``` HELLO ``` Syntax ``` HELLO [protover [AUTH username password] [SETNAME clientname]] ``` Available since: 6.0.0 Time complexity: O(1) ACL categories: `@fast`, `@connection`, Switch to a different protocol, optionally authenticating and setting the connection's name, or provide a contextual client report. Redis version 6 and above supports two protocols: the old protocol, RESP2, and a new one introduced with Redis 6, RESP3. RESP3 has certain advantages since when the connection is in this mode, Redis is able to reply with more semantic replies: for instance, [`HGETALL`](../hgetall) will return a *map type*, so a client library implementation no longer needs to translate the array into a hash before returning it to the caller. For a full coverage of RESP3, please check the [RESP3 specification](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md). In Redis 6 connections start in RESP2 mode, so clients implementing RESP2 do not need to be updated or changed. There are no short term plans to drop support for RESP2, although future versions may default to RESP3. `HELLO` always replies with a list of current server and connection properties, such as: versions, modules loaded, client ID, replication role and so forth. 
When called without any arguments in Redis 6.2 and its default use of RESP2 protocol, the reply looks like this: ``` > HELLO 1) "server" 2) "redis" 3) "version" 4) "255.255.255" 5) "proto" 6) (integer) 2 7) "id" 8) (integer) 5 9) "mode" 10) "standalone" 11) "role" 12) "master" 13) "modules" 14) (empty array) ``` Clients that want to handshake using the RESP3 mode need to call the `HELLO` command and specify the value "3" as the `protover` argument, like so: ``` > HELLO 3 1# "server" => "redis" 2# "version" => "6.0.0" 3# "proto" => (integer) 3 4# "id" => (integer) 10 5# "mode" => "standalone" 6# "role" => "master" 7# "modules" => (empty array) ``` Because `HELLO` replies with useful information, and given that `protover` is optional or can be set to "2", client library authors may consider using this command instead of the canonical [`PING`](../ping) when setting up the connection. When called with the optional `protover` argument, this command switches the protocol to the specified version and also accepts the following options: * `AUTH <username> <password>`: directly authenticate the connection in addition to switching to the specified protocol version. This makes calling [`AUTH`](../auth) before `HELLO` unnecessary when setting up a new connection. Note that the `username` can be set to "default" to authenticate against a server that does not use ACLs, but rather the simpler `requirepass` mechanism of Redis prior to version 6. * `SETNAME <clientname>`: this is the equivalent of calling [`CLIENT SETNAME`](../client-setname). Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): a list of server properties. The reply is a map instead of an array when RESP3 is selected. The command returns an error if the `protover` requested does not exist. History ------- * Starting with Redis version 6.2.0: `protover` made optional; when called without arguments the command reports the current connection's context. 
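Because the RESP2 reply shown above is a flat array of alternating property names and values, a client library can fold it into a map with a few lines. A minimal sketch:

```python
def hello_reply_to_dict(reply):
    """Fold a flat RESP2 [k1, v1, k2, v2, ...] reply into a dict."""
    if len(reply) % 2:
        raise ValueError("reply must have an even number of elements")
    return dict(zip(reply[::2], reply[1::2]))

# Mirrors the RESP2 HELLO example above.
info = hello_reply_to_dict(
    ["server", "redis", "version", "255.255.255", "proto", 2,
     "id", 5, "mode", "standalone", "role", "master", "modules", []]
)
print(info["proto"])  # 2
```

Under RESP3 no folding is needed, since the server already replies with a map type.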
redis CMS.INFO CMS.INFO ======== ``` CMS.INFO ``` Syntax ``` CMS.INFO key ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(1) Returns width, depth and total count of the sketch. ### Parameters: * **key**: The name of the sketch. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) with information of the filter. Examples -------- ``` redis> CMS.INFO test 1) width 2) (integer) 2000 3) depth 4) (integer) 7 5) count 6) (integer) 0 ``` redis PEXPIREAT PEXPIREAT ========= ``` PEXPIREAT ``` Syntax ``` PEXPIREAT key unix-time-milliseconds [NX | XX | GT | LT] ``` Available since: 2.6.0 Time complexity: O(1) ACL categories: `@keyspace`, `@write`, `@fast`, `PEXPIREAT` has the same effect and semantic as [`EXPIREAT`](../expireat), but the Unix time at which the key will expire is specified in milliseconds instead of seconds. Options ------- The `PEXPIREAT` command supports a set of options since Redis 7.0: * `NX` -- Set expiry only when the key has no expiry * `XX` -- Set expiry only when the key has an existing expiry * `GT` -- Set expiry only when the new expiry is greater than current one * `LT` -- Set expiry only when the new expiry is less than current one A non-volatile key is treated as an infinite TTL for the purpose of `GT` and `LT`. The `GT`, `LT` and `NX` options are mutually exclusive. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), specifically: * `1` if the timeout was set. * `0` if the timeout was not set. e.g. key doesn't exist, or operation skipped due to the provided arguments. Examples -------- ``` SET mykey "Hello" PEXPIREAT mykey 1555555555005 TTL mykey PTTL mykey ``` History ------- * Starting with Redis version 7.0.0: Added options: `NX`, `XX`, `GT` and `LT`. 
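Since `PEXPIREAT` takes an absolute Unix time in milliseconds, clients usually derive the argument from a date. A small Python sketch using exact integer arithmetic (avoiding float rounding on the millisecond):

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def to_unix_ms(dt: datetime) -> int:
    """Milliseconds since the Unix epoch, as PEXPIREAT expects."""
    return (dt - EPOCH) // timedelta(milliseconds=1)

# The Examples section uses 1555555555005, i.e. 2019-04-18 02:45:55.005 UTC.
print(to_unix_ms(datetime(2019, 4, 18, 2, 45, 55, 5000, tzinfo=timezone.utc)))
# 1555555555005
```

The `timedelta` floor division keeps the computation in integer microseconds, so no precision is lost the way it can be with `dt.timestamp() * 1000`.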
redis XINFO XINFO ===== ``` XINFO CONSUMERS ``` Syntax ``` XINFO CONSUMERS key group ``` Available since: 5.0.0 Time complexity: O(1) ACL categories: `@read`, `@stream`, `@slow`, This command returns the list of consumers that belong to the `<groupname>` consumer group of the stream stored at `<key>`. The following information is provided for each consumer in the group: * **name**: the consumer's name * **pending**: the number of entries in the PEL: pending messages for the consumer, which are messages that were delivered but are yet to be acknowledged * **idle**: the number of milliseconds that have passed since the consumer's last attempted interaction (Examples: [`XREADGROUP`](../xreadgroup), [`XCLAIM`](../xclaim), [`XAUTOCLAIM`](../xautoclaim)) * **inactive**: the number of milliseconds that have passed since the consumer's last successful interaction (Examples: [`XREADGROUP`](../xreadgroup) that actually read some entries into the PEL, [`XCLAIM`](../xclaim)/[`XAUTOCLAIM`](../xautoclaim) that actually claimed some entries) Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): a list of consumers. Examples -------- ``` > XINFO CONSUMERS mystream mygroup 1) 1) name 2) "Alice" 3) pending 4) (integer) 1 5) idle 6) (integer) 9104628 7) inactive 8) (integer) 18104698 2) 1) name 2) "Bob" 3) pending 4) (integer) 1 5) idle 6) (integer) 83841983 7) inactive 8) (integer) 993841998 ``` History ------- * Starting with Redis version 7.2.0: Added the `inactive` field.
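Each consumer entry in the reply above is itself a flat array of alternating field names and values, so a client can fold each one into a dict. A minimal sketch:

```python
def parse_consumers(reply):
    """Fold each flat [field, value, field, value, ...] entry into a dict."""
    return [dict(zip(entry[::2], entry[1::2])) for entry in reply]

# Mirrors the XINFO CONSUMERS example above.
consumers = parse_consumers([
    ["name", "Alice", "pending", 1, "idle", 9104628, "inactive", 18104698],
    ["name", "Bob", "pending", 1, "idle", 83841983, "inactive", 993841998],
])
print(consumers[0]["name"], consumers[0]["pending"])  # Alice 1
```

This tolerates new fields (such as `inactive`, added in 7.2.0) automatically, since unknown names simply become extra dict entries.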
redis COMMAND COMMAND ======= ``` COMMAND ``` Syntax ``` COMMAND ``` Available since: 2.8.13 Time complexity: O(N) where N is the total number of Redis commands ACL categories: `@slow`, `@connection`, Return an array with details about every Redis command. The `COMMAND` command is introspective. Its reply describes all commands that the server can process. Redis clients can call it to obtain the server's runtime capabilities during the handshake. `COMMAND` also has several subcommands. Please refer to its subcommands for further details. **Cluster note:** this command is especially beneficial for cluster-aware clients. Such clients must identify the names of keys in commands to route requests to the correct shard. Although most commands accept a single key as their first argument, there are many exceptions to this rule. You can call `COMMAND` and then keep the mapping between commands and their respective key specification rules cached in the client. The reply it returns is an array with an element per command. Each element that describes a Redis command is represented as an array by itself. The command's array consists of a fixed number of elements. The exact number of elements in the array depends on the server's version. 1. Name 2. Arity 3. Flags 4. First key 5. Last key 6. Step 7. [ACL categories](https://redis.io/topics/acl) (as of Redis 6.0) 8. [Tips](https://redis.io/topics/command-tips) (as of Redis 7.0) 9. [Key specifications](https://redis.io/topics/key-specs) (as of Redis 7.0) 10. Subcommands (as of Redis 7.0) Name ---- This is the command's name in lowercase. **Note:** Redis command names are case-insensitive. Arity ----- Arity is the number of arguments a command expects. It follows a simple pattern: * A positive integer means a fixed number of arguments. * A negative integer means a minimal number of arguments. Command arity *always includes* the command's name itself (and the subcommand when applicable). 
Examples: * [`GET`](../get)'s arity is *2* since the command only accepts one argument and always has the format `GET _key_`. * [`MGET`](../mget)'s arity is *-2* since the command accepts at least one argument, but possibly multiple ones: `MGET _key1_ [key2] [key3] ...`. Flags ----- Command flags are an array. It can contain the following simple strings (status reply): * **admin:** the command is an administrative command. * **asking:** the command is allowed even during hash slot migration. This flag is relevant in Redis Cluster deployments. * **blocking:** the command may block the requesting client. * **denyoom**: the command is rejected if the server's memory usage is too high (see the *maxmemory* configuration directive). * **fast:** the command operates in constant or log(N) time. This flag is used for monitoring latency with the [`LATENCY`](../latency) command. * **loading:** the command is allowed while the database is loading. * **movablekeys:** the *first key*, *last key*, and *step* values don't determine all key positions. Clients need to use [`COMMAND GETKEYS`](../command-getkeys) or [key specifications](https://redis.io/topics/key-specs) in this case. See below for more details. * **no\_auth:** executing the command doesn't require authentication. * **no\_async\_loading:** the command is denied during asynchronous loading (that is when a replica uses disk-less `SWAPDB SYNC`, and allows access to the old dataset). * **no\_mandatory\_keys:** the command may accept key name arguments, but these aren't mandatory. * **no\_multi:** the command isn't allowed inside the context of a [transaction](https://redis.io/topics/transactions). * **noscript:** the command can't be called from [scripts](https://redis.io/topics/eval-intro) or [functions](https://redis.io/topics/functions-intro). * **pubsub:** the command is related to [Redis Pub/Sub](https://redis.io/topics/pubsub). 
* **random**: the command returns random results, which is a concern with verbatim script replication. As of Redis 7.0, this flag is a [command tip](https://redis.io/topics/command-tips). * **readonly:** the command doesn't modify data. * **sort\_for\_script:** the command's output is sorted when called from a script. * **skip\_monitor:** the command is not shown in [`MONITOR`](../monitor)'s output. * **skip\_slowlog:** the command is not shown in [`SLOWLOG`](../slowlog)'s output. As of Redis 7.0, this flag is a [command tip](https://redis.io/topics/command-tips). * **stale:** the command is allowed while a replica has stale data. * **write:** the command may modify data. ### Movablekeys Consider [`SORT`](../sort): ``` 1) 1) "sort" 2) (integer) -2 3) 1) write 2) denyoom 3) movablekeys 4) (integer) 1 5) (integer) 1 6) (integer) 1 ... ``` Some Redis commands have key locations that are not predetermined or are not easy to find. For those commands, the *movablekeys* flag indicates that the *first key*, *last key*, and *step* values are insufficient to find all the keys. Here are several examples of commands that have the *movablekeys* flag: * [`SORT`](../sort): the optional *STORE*, *BY*, and *GET* modifiers are followed by names of keys. * [`ZUNION`](../zunion): the *numkeys* argument specifies the number of key name arguments. * [`MIGRATE`](../migrate): the keys appear after the *KEYS* keyword, and only when the second argument is the empty string. Redis Cluster clients need to use other measures, as follows, to locate the keys for such commands. You can use the [`COMMAND GETKEYS`](../command-getkeys) command and have your Redis server report all keys of a given command's invocation. As of Redis 7.0, clients can use the [key specifications](#key-specifications) to identify the positions of key names. The only commands that require using [`COMMAND GETKEYS`](../command-getkeys) are [`SORT`](../sort) and [`MIGRATE`](../migrate) for clients that parse keys' specifications. 
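For the majority of commands — those *without* the movablekeys flag — the first key, last key, and step values fully determine the key positions. A sketch of that extraction rule (a hypothetical helper, using the convention that a negative last key counts from the end of the argument list):

```python
def extract_keys(argv, first, last, step):
    """Return the key arguments of argv using COMMAND's first/last/step values.

    argv[0] is the command name itself; a negative last key means "through
    the end of argv" (e.g. MGET reports last key = -1).
    """
    if first == 0:  # the command takes no keys at all
        return []
    if last < 0:
        last = len(argv) + last  # resolve relative to the argument count
    return [argv[i] for i in range(first, last + 1, step)]

print(extract_keys(["get", "mykey"], 1, 1, 1))                   # ['mykey']
print(extract_keys(["mget", "k1", "k2", "k3"], 1, -1, 1))        # ['k1', 'k2', 'k3']
print(extract_keys(["mset", "k1", "v1", "k2", "v2"], 1, -1, 2))  # ['k1', 'k2']
```

The three calls mirror the GET, MGET, and MSET values reported by `COMMAND`; for movablekeys commands this helper is insufficient, which is exactly why `COMMAND GETKEYS` exists.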
For more information, please refer to the [key specifications page](https://redis.io/topics/key-specs). First key --------- The position of the command's first key name argument. For most commands, the first key's position is 1. Position 0 is always the command name itself. Last key -------- The position of the command's last key name argument. Redis commands usually accept one key, two keys, or an arbitrary number of keys. Commands that accept a single key have both *first key* and *last key* set to 1. Commands that accept two key name arguments, e.g. [`BRPOPLPUSH`](../brpoplpush), [`SMOVE`](../smove) and [`RENAME`](../rename), have this value set to the position of their second key. Multi-key commands that accept an arbitrary number of keys, such as [`MSET`](../mset), use the value -1. Step ---- The step, or increment, between the *first key* and the position of the next key. Consider the following two examples: ``` 1) 1) "mset" 2) (integer) -3 3) 1) write 2) denyoom 4) (integer) 1 5) (integer) -1 6) (integer) 2 ... ``` ``` 1) 1) "mget" 2) (integer) -2 3) 1) readonly 2) fast 4) (integer) 1 5) (integer) -1 6) (integer) 1 ... ``` The step count allows us to find keys' positions. For example [`MSET`](../mset): its syntax is `MSET _key1_ _val1_ [key2] [val2] [key3] [val3]...`, so the keys are at every other position (step value of *2*). [`MGET`](../mget), by contrast, uses a step value of *1*. ACL categories -------------- This is an array of simple strings that are the ACL categories to which the command belongs. Please refer to the [Access Control List](https://redis.io/topics/acl) page for more information. Command tips ------------ Helpful information about the command. To be used by clients/proxies. Please check the [Command tips](https://redis.io/topics/command-tips) page for more information. Key specifications ------------------ This is an array consisting of the command's key specifications. 
Each element in the array is a map describing a method for locating keys in the command's arguments. For more information please check the [key specifications page](https://redis.io/topics/key-specs). Subcommands ----------- This is an array containing all of the command's subcommands, if any. Some Redis commands have subcommands (e.g., the `REWRITE` subcommand of [`CONFIG`](../config)). Each element in the array represents one subcommand and follows the same specifications as those of `COMMAND`'s reply. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): a nested list of command details. The order of commands in the array is random. Examples -------- The following is `COMMAND`'s output for the [`GET`](../get) command: ``` 1) 1) "get" 2) (integer) 2 3) 1) readonly 2) fast 4) (integer) 1 5) (integer) 1 6) (integer) 1 7) 1) @read 2) @string 3) @fast 8) (empty array) 9) 1) 1) "flags" 2) 1) read 3) "begin_search" 4) 1) "type" 2) "index" 3) "spec" 4) 1) "index" 2) (integer) 1 5) "find_keys" 6) 1) "type" 2) "range" 3) "spec" 4) 1) "lastkey" 2) (integer) 0 3) "keystep" 4) (integer) 1 5) "limit" 6) (integer) 0 10) (empty array) ... ``` redis ZUNIONSTORE ZUNIONSTORE =========== ``` ZUNIONSTORE ``` Syntax ``` ZUNIONSTORE destination numkeys key [key ...] [WEIGHTS weight [weight ...]] [AGGREGATE <SUM | MIN | MAX>] ``` Available since: 2.0.0 Time complexity: O(N)+O(M log(M)) with N being the sum of the sizes of the input sorted sets, and M being the number of elements in the resulting sorted set. ACL categories: `@write`, `@sortedset`, `@slow`, Computes the union of `numkeys` sorted sets given by the specified keys, and stores the result in `destination`. It is mandatory to provide the number of input keys (`numkeys`) before passing the input keys and the other (optional) arguments. By default, the resulting score of an element is the sum of its scores in the sorted sets where it exists. 
Using the `WEIGHTS` option, it is possible to specify a multiplication factor for each input sorted set. This means that the score of every element in every input sorted set is multiplied by this factor before being passed to the aggregation function. When `WEIGHTS` is not given, the multiplication factors default to `1`. With the `AGGREGATE` option, it is possible to specify how the results of the union are aggregated. This option defaults to `SUM`, where the score of an element is summed across the inputs where it exists. When this option is set to either `MIN` or `MAX`, the resulting set will contain the minimum or maximum score of an element across the inputs where it exists. If `destination` already exists, it is overwritten. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of elements in the resulting sorted set at `destination`. Examples -------- ``` ZADD zset1 1 "one" ZADD zset1 2 "two" ZADD zset2 1 "one" ZADD zset2 2 "two" ZADD zset2 3 "three" ZUNIONSTORE out 2 zset1 zset2 WEIGHTS 2 3 ZRANGE out 0 -1 WITHSCORES ``` redis CLIENT CLIENT ====== ``` CLIENT LIST ``` Syntax ``` CLIENT LIST [TYPE <NORMAL | MASTER | REPLICA | PUBSUB>] [ID client-id [client-id ...]] ``` Available since: 2.4.0 Time complexity: O(N) where N is the number of client connections ACL categories: `@admin`, `@slow`, `@dangerous`, `@connection`, The `CLIENT LIST` command returns information and statistics about client connections to the server in a mostly human-readable format. You can use one of the optional subcommands to filter the list. The `TYPE type` subcommand filters the list by clients' type, where *type* is one of `normal`, `master`, `replica`, and `pubsub`. Note that clients blocked by the [`MONITOR`](../monitor) command belong to the `normal` class. The `ID` filter only returns entries for clients with IDs matching the `client-id` arguments. 
Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): a unique string, formatted as follows: * One client connection per line (separated by LF) * Each line is composed of a succession of `property=value` fields separated by a space character. Here is the meaning of the fields: * `id`: a unique 64-bit client ID * `addr`: address/port of the client * `laddr`: address/port of local address client connected to (bind address) * `fd`: file descriptor corresponding to the socket * `name`: the name set by the client with [`CLIENT SETNAME`](../client-setname) * `age`: total duration of the connection in seconds * `idle`: idle time of the connection in seconds * `flags`: client flags (see below) * `db`: current database ID * `sub`: number of channel subscriptions * `psub`: number of pattern matching subscriptions * `ssub`: number of shard channel subscriptions. Added in Redis 7.0.3 * `multi`: number of commands in a MULTI/EXEC context * `qbuf`: query buffer length (0 means no query pending) * `qbuf-free`: free space of the query buffer (0 means the buffer is full) * `argv-mem`: incomplete arguments for the next command (already extracted from the query buffer) * `multi-mem`: memory used by buffered multi commands. Added in Redis 7.0 * `obl`: output buffer length * `oll`: output list length (replies are queued in this list when the buffer is full) * `omem`: output buffer memory usage * `tot-mem`: total memory consumed by this client in its various buffers * `events`: file descriptor events (see below) * `cmd`: last command played * `user`: the authenticated username of the client * `redir`: client id of current client tracking redirection * `resp`: client RESP protocol version. 
Added in Redis 7.0 The client flags can be a combination of: ``` A: connection to be closed ASAP b: the client is waiting in a blocking operation c: connection to be closed after writing entire reply d: a watched key has been modified - EXEC will fail i: the client is waiting for a VM I/O (deprecated) M: the client is a master N: no specific flag set O: the client is a client in MONITOR mode P: the client is a Pub/Sub subscriber r: the client is in readonly mode against a cluster node S: the client is a replica node connection to this instance u: the client is unblocked U: the client is connected via a Unix domain socket x: the client is in a MULTI/EXEC context t: the client enabled keys tracking in order to perform client side caching R: the client tracking target client is invalid B: the client enabled broadcast tracking mode ``` The file descriptor events can be: ``` r: the client socket is readable (event loop) w: the client socket is writable (event loop) ``` Notes ----- New fields are regularly added for debugging purposes. Some could be removed in the future. A version-safe Redis client using this command should parse the output accordingly (i.e. gracefully handling missing fields and skipping unknown fields). History ------- * Starting with Redis version 2.8.12: Added unique client `id` field. * Starting with Redis version 5.0.0: Added optional `TYPE` filter. * Starting with Redis version 6.0.0: Added `user` field. * Starting with Redis version 6.2.0: Added `argv-mem`, `tot-mem`, `laddr` and `redir` fields and the optional `ID` filter. * Starting with Redis version 7.0.0: Added `resp`, `multi-mem`, `rbs` and `rbp` fields. * Starting with Redis version 7.0.3: Added `ssub` field. redis HSTRLEN HSTRLEN ======= ``` HSTRLEN ``` Syntax ``` HSTRLEN key field ``` Available since: 3.2.0 Time complexity: O(1) ACL categories: `@read`, `@hash`, `@fast`, Returns the string length of the value associated with `field` in the hash stored at `key`. 
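Following the parsing guidance in the Notes above — tolerate missing fields, skip anything unknown — each reply line can be split into a dict of `property=value` fields. A minimal sketch, assuming the raw bulk string has already been received:

```python
def parse_client_list(reply: str):
    """Split each LF-separated line of property=value fields into a dict."""
    clients = []
    for line in reply.splitlines():
        if not line.strip():
            continue
        # Skip tokens without '=' so unknown future formats degrade gracefully.
        fields = dict(f.split("=", 1) for f in line.split(" ") if "=" in f)
        clients.append(fields)
    return clients

reply = "id=3 addr=127.0.0.1:60302 laddr=127.0.0.1:6379 name= age=4 idle=0 flags=N db=0 cmd=client|list\n"
clients = parse_client_list(reply)
print(clients[0]["addr"])  # 127.0.0.1:60302
```

Empty values like `name=` parse to an empty string, and fields this sketch has never seen (e.g. ones added in future Redis versions) are kept as plain strings rather than rejected.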
If the `key` or the `field` do not exist, 0 is returned. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the string length of the value associated with `field`, or zero when `field` is not present in the hash or `key` does not exist at all. Examples -------- ``` HMSET myhash f1 HelloWorld f2 99 f3 -256 HSTRLEN myhash f1 HSTRLEN myhash f2 HSTRLEN myhash f3 ``` redis CLIENT CLIENT ====== ``` CLIENT PAUSE ``` Syntax ``` CLIENT PAUSE timeout [WRITE | ALL] ``` Available since: 3.0.0 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, `@connection`, `CLIENT PAUSE` is a connection control command that suspends all Redis clients for the specified amount of time (in milliseconds). The command performs the following actions: * It stops processing all the pending commands from normal and pub/sub clients for the given mode. However, interactions with replicas will continue normally. Note that clients are formally paused when they try to execute a command, so no work is performed on the server side for inactive clients. * It returns OK to the caller immediately, so the execution of the `CLIENT PAUSE` command is not itself paused. * When the specified amount of time has elapsed, all the clients are unblocked: this will trigger the processing of all the commands accumulated in the query buffer of every client during the pause. Client pause currently supports two modes: * `ALL`: This is the default mode. All client commands are blocked. * `WRITE`: Clients are only blocked if they attempt to execute a write command. For the `WRITE` mode, some commands have special behavior: * [`EVAL`](../eval)/[`EVALSHA`](../evalsha): Will block the client for all scripts. * [`PUBLISH`](../publish): Will block the client. * [`PFCOUNT`](../pfcount): Will block the client. * [`WAIT`](../wait): Acknowledgments will be delayed, so this command will appear blocked. 
This command is useful because it makes it possible to switch clients from one Redis instance to another in a controlled way. For example, during an instance upgrade the system administrator could do the following: * Pause the clients using `CLIENT PAUSE` * Wait a few seconds to make sure the replicas processed the latest replication stream from the master. * Turn one of the replicas into a master. * Reconfigure clients to connect with the new master. Since Redis 6.2, the recommended mode for client pause is `WRITE`. This mode will stop all replication traffic, can be aborted with the [`CLIENT UNPAUSE`](../client-unpause) command, and allows reconfiguring the old master without risking accepting writes after the failover. This is also the mode used during cluster failover. For versions before 6.2, it is possible to send `CLIENT PAUSE` in a MULTI/EXEC block together with the `INFO replication` command in order to get the current master offset at the time the clients are blocked. This way it is possible to wait for a specific offset on the replica side in order to make sure all the replication stream was processed. Since Redis 3.2.10 / 4.0.0, this command also prevents keys from being evicted or expiring while clients are paused. This way the dataset is guaranteed to be static not just from the point of view of clients not being able to write, but also from the point of view of internal operations. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): The command returns OK or an error if the timeout is invalid. Behavior change history ----------------------- * `>= 3.2.0`: Client pause also prevents key eviction and expiration. History ------- * Starting with Redis version 6.2.0: `CLIENT PAUSE WRITE` mode added along with the `mode` option. redis HGETALL HGETALL ======= ``` HGETALL ``` Syntax ``` HGETALL key ``` Available since: 2.0.0 Time complexity: O(N) where N is the size of the hash. 
ACL categories: `@read`, `@hash`, `@slow`, Returns all fields and values of the hash stored at `key`. In the returned value, every field name is followed by its value, so the length of the reply is twice the size of the hash. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list of fields and their values stored in the hash, or an empty list when `key` does not exist. Examples -------- ``` HSET myhash field1 "Hello" HSET myhash field2 "World" HGETALL myhash ```
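Because the reply interleaves field names and values, a client typically pairs them back up into a map. A minimal sketch of that pairing in Python (the flat `reply` list below is a hypothetical already-decoded reply, not output from a real connection):

```python
def hgetall_reply_to_dict(reply):
    """Pair up a flat HGETALL reply [field1, value1, field2, value2, ...]
    into a dict. The reply length is always twice the hash size."""
    if len(reply) % 2 != 0:
        raise ValueError("HGETALL reply must have an even number of entries")
    # Even indices are field names, odd indices are the following values.
    return dict(zip(reply[0::2], reply[1::2]))

# Simulated reply for the example above (HGETALL myhash):
reply = ["field1", "Hello", "field2", "World"]
print(hgetall_reply_to_dict(reply))  # {'field1': 'Hello', 'field2': 'World'}
```

Most client libraries perform this conversion automatically, returning a native map type for `HGETALL`.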
redis GEORADIUSBYMEMBER GEORADIUSBYMEMBER ================= ``` GEORADIUSBYMEMBER (deprecated) ``` As of Redis version 6.2.0, this command is regarded as deprecated. It can be replaced by [`GEOSEARCH`](../geosearch) and [`GEOSEARCHSTORE`](../geosearchstore) with the `BYRADIUS` and `FROMMEMBER` arguments when migrating or writing new code. Syntax ``` GEORADIUSBYMEMBER key member radius <M | KM | FT | MI> [WITHCOORD] [WITHDIST] [WITHHASH] [COUNT count [ANY]] [ASC | DESC] [STORE key] [STOREDIST key] ``` Available since: 3.2.0 Time complexity: O(N+log(M)) where N is the number of elements inside the bounding box of the circular area delimited by center and radius and M is the number of items inside the index. ACL categories: `@write`, `@geo`, `@slow`, This command is exactly like [`GEORADIUS`](../georadius) with the sole difference that instead of taking, as the center of the area to query, a longitude and latitude value, it takes the name of a member already existing inside the geospatial index represented by the sorted set. The position of the specified member is used as the center of the query. Please check the example below and the [`GEORADIUS`](../georadius) documentation for more information about the command and its options. Note that [`GEORADIUSBYMEMBER_RO`](../georadiusbymember_ro) is also available since Redis 3.2.10 and Redis 4.0.0 in order to provide a read-only command that can be used in replicas. See the [`GEORADIUS`](../georadius) page for more information. Examples -------- ``` GEOADD Sicily 13.583333 37.316667 "Agrigento" GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania" GEORADIUSBYMEMBER Sicily Agrigento 100 km ``` History ------- * Starting with Redis version 7.0.0: Added support for uppercase unit names. redis EVAL EVAL ==== ``` EVAL ``` Syntax ``` EVAL script numkeys [key [key ...]] [arg [arg ...]] ``` Available since: 2.6.0 Time complexity: Depends on the script that is executed. 
ACL categories: `@slow`, `@scripting`, Invoke the execution of a server-side Lua script. The first argument is the script's source code. Scripts are written in [Lua](https://lua.org) and executed by the embedded [Lua 5.1](https://redis.io/topics/lua-api) interpreter in Redis. The second argument is the number of input key name arguments, followed by all the keys accessed by the script. These names of input keys are available to the script as the [*KEYS* global runtime variable](https://redis.io/topics/lua-api#the-keys-global-variable). Any additional input arguments **should not** represent names of keys. **Important:** to ensure the correct execution of scripts, both in standalone and clustered deployments, all names of keys that a script accesses must be explicitly provided as input key arguments. The script **should only** access keys whose names are given as input arguments. Scripts **should never** access keys with programmatically-generated names or based on the contents of data structures stored in the database. Please refer to the [Redis Programmability](https://redis.io/topics/programmability) and [Introduction to Eval Scripts](https://redis.io/topics/eval-intro) for more information about Lua scripts. Examples -------- The following example will run a script that returns the first argument that it gets. ``` > EVAL "return ARGV[1]" 0 hello "hello" ``` redis ACL ACL === ``` ACL SETUSER ``` Syntax ``` ACL SETUSER username [rule [rule ...]] ``` Available since: 6.0.0 Time complexity: O(N). Where N is the number of rules provided. ACL categories: `@admin`, `@slow`, `@dangerous`, Create an ACL user with the specified rules or modify the rules of an existing user. Manipulate Redis ACL users interactively. If the username does not exist, the command creates the username without any privilege. It then reads from left to right all the [rules](#acl-rules) provided as successive arguments, setting the user ACL rules as specified. 
If the user already exists, the provided ACL rules are simply applied *in addition* to the rules already set. For example: ``` ACL SETUSER virginia on allkeys +set ``` The above command creates a user called `virginia` who is active (the *on* rule), can access any key (the *allkeys* rule), and can call the set command (the *+set* rule). Then, you can use another `ACL SETUSER` call to modify the user rules: ``` ACL SETUSER virginia +get ``` The above command applies the new rule to the user `virginia`, so in addition to [`SET`](../set), the user `virginia` can now also use the [`GET`](../get) command. Starting from Redis 7.0, ACL rules can also be grouped into multiple distinct sets of rules, called *selectors*. Selectors are added by wrapping the rules in parentheses and providing them just like any other rule. In order to execute a command, either the root permissions (rules defined outside of parentheses) or any of the selectors (rules defined inside parentheses) must match the given command. For example: ``` ACL SETUSER virginia on +GET allkeys (+SET ~app1*) ``` This sets a user with two sets of permissions, one defined on the user and one defined with a selector. The root permissions only allow executing the get command, but it can be executed on any key. The selector then grants a secondary set of permissions: access to the [`SET`](../set) command, to be executed on any key that starts with `app1`. Using multiple selectors allows you to grant permissions that differ depending on what keys are being accessed. When we want to be sure to define a user from scratch, without caring whether it previously had any associated rules, we can use the special rule `reset` as the first rule, in order to flush all the other existing rules: ``` ACL SETUSER antirez reset [... other rules ...] 
``` After resetting a user, its ACL rules revert to the default: inactive, passwordless, can't execute any command nor access any key or channel: ``` > ACL SETUSER antirez reset +OK > ACL LIST 1) "user antirez off -@all" ``` ACL rules are either words like "on", "off", "reset", "allkeys", or are special rules that start with a special character, and are followed by another string (without any space in between), like "+SET". The following documentation is a reference manual about the capabilities of this command, however our [ACL tutorial](https://redis.io/topics/acl) may be a more gentle introduction to how the ACL system works in general. ACL rules --------- Redis ACL rules are split into two categories: rules that define command permissions or *command rules*, and rules that define the user state or *user management rules*. This is a list of all the supported Redis ACL rules: ### Command rules * `~<pattern>`: Adds the specified key pattern (glob style pattern, like in the [`KEYS`](../keys) command), to the list of key patterns accessible by the user. This grants both read and write permissions to keys that match the pattern. You can add multiple key patterns to the same user. Example: `~objects:*` * `%R~<pattern>`: (Available in Redis 7.0 and later) Adds the specified read key pattern. This behaves similarly to the regular key pattern but only grants permission to read from keys that match the given pattern. See [key permissions](https://redis.io/topics/acl#key-permissions) for more information. * `%W~<pattern>`: (Available in Redis 7.0 and later) Adds the specified write key pattern. This behaves similarly to the regular key pattern but only grants permission to write to keys that match the given pattern. See [key permissions](https://redis.io/topics/acl#key-permissions) for more information. * `%RW~<pattern>`: (Available in Redis 7.0 and later) Alias for `~<pattern>`. * `allkeys`: Alias for `~*`, it allows the user to access all the keys. 
* `resetkeys`: Removes all the key patterns from the list of key patterns the user can access. * `&<pattern>`: (Available in Redis 6.2 and later) Adds the specified glob style pattern to the list of Pub/Sub channel patterns accessible by the user. You can add multiple channel patterns to the same user. Example: `&chatroom:*` * `allchannels`: Alias for `&*`, it allows the user to access all Pub/Sub channels. * `resetchannels`: Removes all channel patterns from the list of Pub/Sub channel patterns the user can access. * `+<command>`: Adds the command to the list of commands the user can call. Can be used with `|` for allowing subcommands (e.g., "+config|get"). * `+@<category>`: Adds all the commands in the specified category to the list of commands the user is able to execute. Example: `+@string` (adds all the string commands). For a list of categories, check the [`ACL CAT`](../acl-cat) command. * `+<command>|first-arg`: Allows a specific first argument of an otherwise disabled command. It is only supported on commands with no sub-commands, and is not allowed in negative form like -SELECT|1, only in additive form starting with "+". This feature is deprecated and may be removed in the future. * `allcommands`: Alias of `+@all`. Adds all the commands in the server, including *future commands* loaded via modules, to the set of commands this user can execute. * `-<command>`: Removes the command from the list of commands the user can call. Starting with Redis 7.0, it can be used with `|` for blocking subcommands (e.g., "-config|set"). * `-@<category>`: Like `+@<category>` but removes all the commands in the category instead of adding them. * `nocommands`: Alias for `-@all`. Removes all the commands, and the user is no longer able to execute anything. ### User management rules * `on`: Sets the user as active; it will be possible to authenticate as this user using `AUTH <username> <password>`. * `off`: Sets the user as not active; it will be impossible to log in as this user. 
Please note that if a user gets disabled (set to off) while there are connections already authenticated as that user, the connections will continue to work as expected. To also kill the old connections you can use [`CLIENT KILL`](../client-kill) with the user option. An alternative is to delete the user with [`ACL DELUSER`](../acl-deluser), which will result in all the connections authenticated as the deleted user being disconnected. * `nopass`: The user is set as a *no password* user. It means that it will be possible to authenticate as such a user with any password. By default, the `default` special user is set as "nopass". The `nopass` rule will also reset all the configured passwords for the user. * `>password`: Adds the specified clear text password as a hashed password in the list of the user's passwords. Every user can have many active passwords, so that password rotation will be simpler. The specified password is not stored as clear text inside the server. Example: `>mypassword`. * `#<hashedpassword>`: Adds the specified hashed password to the list of user passwords. A Redis hashed password is hashed with SHA256 and translated into a hexadecimal string. Example: `#c3ab8ff13720e8ad9047dd39466b3c8974e592c2fa383d4a3960714caef0c4f2`. * `<password`: Like `>password` but removes the password instead of adding it. * `!<hashedpassword>`: Like `#<hashedpassword>` but removes the password instead of adding it. * `(<rule list>)`: (Available in Redis 7.0 and later) Creates a new selector to match rules against. Selectors are evaluated after the user permissions, and are evaluated according to the order they are defined. If a command matches either the user permissions or any selector, it is allowed. See [selectors](https://redis.io/docs/management/security/acl#selectors) for more information. * `clearselectors`: (Available in Redis 7.0 and later) Deletes all of the selectors attached to the user. * `reset`: Removes any capability from the user. 
The user is set to off, without passwords, and unable to execute any command or access any key. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` on success. If the rules contain errors, the error is returned. Examples -------- ``` > ACL SETUSER alan allkeys +@string +@set -SADD >alanpassword +OK > ACL SETUSER antirez heeyyyy (error) ERR Error in ACL SETUSER modifier 'heeyyyy': Syntax error ``` History ------- * Starting with Redis version 6.2.0: Added Pub/Sub channel patterns. * Starting with Redis version 7.0.0: Added selectors and key based permissions. redis SINTERCARD SINTERCARD ========== ``` SINTERCARD ``` Syntax ``` SINTERCARD numkeys key [key ...] [LIMIT limit] ``` Available since: 7.0.0 Time complexity: O(N\*M) worst case where N is the cardinality of the smallest set and M is the number of sets. ACL categories: `@read`, `@set`, `@slow`, This command is similar to [`SINTER`](../sinter), but instead of returning the result set, it returns just the cardinality of the result. Returns the cardinality of the set which would result from the intersection of all the given sets. Keys that do not exist are considered to be empty sets. With one of the keys being an empty set, the resulting set is also empty (since set intersection with an empty set always results in an empty set). By default, the command calculates the cardinality of the intersection of all given sets. When provided with the optional `LIMIT` argument (which defaults to 0 and means unlimited), if the intersection cardinality reaches the limit partway through the computation, the algorithm will exit and yield the limit as the cardinality. Such an implementation ensures a significant speedup for queries where the limit is lower than the actual intersection cardinality. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of elements in the resulting intersection. 
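The `LIMIT` early-exit behavior can be sketched in Python over plain sets (a client-side illustration of the semantics, not the server's actual implementation):

```python
def sintercard(sets, limit=0):
    """Count the intersection cardinality of several sets, stopping early
    once `limit` common members are found (0 means unlimited)."""
    if not sets:
        return 0
    # Iterate the smallest set, matching the O(N*M) worst-case bound where
    # N is the cardinality of the smallest set and M is the number of sets.
    smallest = min(sets, key=len)
    count = 0
    for member in smallest:
        if all(member in s for s in sets if s is not smallest):
            count += 1
            if limit and count >= limit:
                break  # early exit: yield the limit as the cardinality
    return count

print(sintercard([{"a", "b", "c", "d"}, {"c", "d", "e"}]))           # 2
print(sintercard([{"a", "b", "c", "d"}, {"c", "d", "e"}], limit=1))  # 1
```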
Examples -------- ``` SADD key1 "a" SADD key1 "b" SADD key1 "c" SADD key1 "d" SADD key2 "c" SADD key2 "d" SADD key2 "e" SINTER key1 key2 SINTERCARD 2 key1 key2 SINTERCARD 2 key1 key2 LIMIT 1 ``` redis GEORADIUS_RO GEORADIUS\_RO ============= ``` GEORADIUS_RO (deprecated) ``` As of Redis version 6.2.0, this command is regarded as deprecated. It can be replaced by [`GEOSEARCH`](../geosearch) with the `BYRADIUS` argument when migrating or writing new code. Syntax ``` GEORADIUS_RO key longitude latitude radius <M | KM | FT | MI> [WITHCOORD] [WITHDIST] [WITHHASH] [COUNT count [ANY]] [ASC | DESC] ``` Available since: 3.2.10 Time complexity: O(N+log(M)) where N is the number of elements inside the bounding box of the circular area delimited by center and radius and M is the number of items inside the index. ACL categories: `@read`, `@geo`, `@slow`, Read-only variant of the [`GEORADIUS`](../georadius) command. This command is identical to the [`GEORADIUS`](../georadius) command, except that it doesn't support the optional `STORE` and `STOREDIST` parameters. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): identical to the reply of the [`GEORADIUS`](../georadius) command. History ------- * Starting with Redis version 6.2.0: Added the `ANY` option for `COUNT`. redis GEORADIUSBYMEMBER_RO GEORADIUSBYMEMBER\_RO ===================== ``` GEORADIUSBYMEMBER_RO (deprecated) ``` As of Redis version 6.2.0, this command is regarded as deprecated. It can be replaced by [`GEOSEARCH`](../geosearch) with the `BYRADIUS` and `FROMMEMBER` arguments when migrating or writing new code. 
Syntax ``` GEORADIUSBYMEMBER_RO key member radius <M | KM | FT | MI> [WITHCOORD] [WITHDIST] [WITHHASH] [COUNT count [ANY]] [ASC | DESC] ``` Available since: 3.2.10 Time complexity: O(N+log(M)) where N is the number of elements inside the bounding box of the circular area delimited by center and radius and M is the number of items inside the index. ACL categories: `@read`, `@geo`, `@slow`, Read-only variant of the [`GEORADIUSBYMEMBER`](../georadiusbymember) command. This command is identical to the [`GEORADIUSBYMEMBER`](../georadiusbymember) command, except that it doesn't support the optional `STORE` and `STOREDIST` parameters. redis PFMERGE PFMERGE ======= ``` PFMERGE ``` Syntax ``` PFMERGE destkey [sourcekey [sourcekey ...]] ``` Available since: 2.8.9 Time complexity: O(N) to merge N HyperLogLogs, but with high constant times. ACL categories: `@write`, `@hyperloglog`, `@slow`, Merge multiple HyperLogLog values into a unique value that will approximate the cardinality of the union of the observed sets of the source HyperLogLog structures. The computed merged HyperLogLog is set to the destination variable, which is created if it does not exist (defaulting to an empty HyperLogLog). If the destination variable exists, it is treated as one of the source sets and its cardinality will be included in the cardinality of the computed HyperLogLog. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): The command just returns `OK`. Examples -------- ``` PFADD hll1 foo bar zap a PFADD hll2 a b c foo PFMERGE hll3 hll1 hll2 PFCOUNT hll3 ``` redis SLOWLOG SLOWLOG ======= ``` SLOWLOG GET ``` Syntax ``` SLOWLOG GET [count] ``` Available since: 2.2.12 Time complexity: O(N) where N is the number of entries returned ACL categories: `@admin`, `@slow`, `@dangerous`, The `SLOWLOG GET` command returns entries from the slow log in chronological order. The Redis Slow Log is a system to log queries that exceeded a specified execution time. 
The execution time does not include I/O operations like talking with the client, sending the reply and so forth, but just the time needed to actually execute the command (this is the only stage of command execution where the thread is blocked and cannot serve other requests in the meantime). A new entry is added to the slow log whenever a command exceeds the execution time threshold defined by the `slowlog-log-slower-than` configuration directive. The maximum number of entries in the slow log is governed by the `slowlog-max-len` configuration directive. By default the command returns the latest ten entries in the log. The optional `count` argument limits the number of returned entries, so the command returns at most `count` entries; the special number -1 means return all entries. Each entry from the slow log is composed of the following six values: 1. A unique progressive identifier for every slow log entry. 2. The unix timestamp at which the logged command was processed. 3. The amount of time needed for its execution, in microseconds. 4. The array composing the arguments of the command. 5. Client IP address and port. 6. Client name if set via the [`CLIENT SETNAME`](../client-setname) command. The entry's unique ID can be used in order to avoid processing slow log entries multiple times (for instance you may have a script sending you an email alert for every new slow log entry). The ID is never reset in the course of the Redis server execution, only a server restart will reset it. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): a list of slow log entries. History ------- * Starting with Redis version 4.0.0: Added client IP address, port and name to the reply. redis MGET MGET ==== ``` MGET ``` Syntax ``` MGET key [key ...] ``` Available since: 1.0.0 Time complexity: O(N) where N is the number of keys to retrieve. ACL categories: `@read`, `@string`, `@fast`, Returns the values of all specified keys. 
For every key that does not hold a string value or does not exist, the special value `nil` is returned. Because of this, the operation never fails. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list of values at the specified keys. Examples -------- ``` SET key1 "Hello" SET key2 "World" MGET key1 key2 nonexisting ``` redis SELECT SELECT ====== ``` SELECT ``` Syntax ``` SELECT index ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@fast`, `@connection`, Select the Redis logical database having the specified zero-based numeric index. New connections always use database 0. Selectable Redis databases are a form of namespacing: all databases are still persisted in the same RDB / AOF file. However, different databases can have keys with the same name, and commands like [`FLUSHDB`](../flushdb), [`SWAPDB`](../swapdb) or [`RANDOMKEY`](../randomkey) work on specific databases. In practical terms, Redis databases should be used to separate different keys belonging to the same application (if needed), and not to use a single Redis instance for multiple unrelated applications. When using Redis Cluster, the `SELECT` command cannot be used, since Redis Cluster only supports database zero. In the case of a Redis Cluster, having multiple databases would be useless and an unnecessary source of complexity. Commands operating atomically on a single database would not be possible with the Redis Cluster design and goals. Since the currently selected database is a property of the connection, clients should track the currently selected database and re-select it on reconnection. While there is no command to query the selected database in the current connection, the [`CLIENT LIST`](../client-list) output shows, for each client, the currently selected database. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings)
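The advice above to track the selected database client-side can be sketched as a thin wrapper that remembers the index and replays `SELECT` after reconnecting. A Python sketch with a hypothetical `StubConnection` standing in for a real socket (names and shapes are illustrative assumptions, not any particular client library's API):

```python
class StubConnection:
    """Hypothetical stand-in for a real Redis connection; records commands."""
    def __init__(self):
        self.sent = []
    def send(self, *args):
        self.sent.append(args)
        return "OK"

class DbTrackingClient:
    def __init__(self, connect):
        self._connect = connect
        self.db = 0                 # new connections always use database 0
        self.conn = connect()
    def select(self, index):
        reply = self.conn.send("SELECT", index)
        self.db = index             # remember the selection client-side
        return reply
    def reconnect(self):
        self.conn = self._connect() # a fresh connection starts on database 0
        if self.db != 0:
            self.conn.send("SELECT", self.db)  # re-select the tracked database

client = DbTrackingClient(StubConnection)
client.select(3)
client.reconnect()
print(client.conn.sent)  # [('SELECT', 3)] - re-issued on the new connection
```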
redis SSCAN SSCAN ===== ``` SSCAN ``` Syntax ``` SSCAN key cursor [MATCH pattern] [COUNT count] ``` Available since: 2.8.0 Time complexity: O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0. N is the number of elements inside the collection. ACL categories: `@read`, `@set`, `@slow`, See [`SCAN`](../scan) for `SSCAN` documentation. redis LMPOP LMPOP ===== ``` LMPOP ``` Syntax ``` LMPOP numkeys key [key ...] <LEFT | RIGHT> [COUNT count] ``` Available since: 7.0.0 Time complexity: O(N+M) where N is the number of provided keys and M is the number of elements returned. ACL categories: `@write`, `@list`, `@slow`, Pops one or more elements from the first non-empty list key from the list of provided key names. `LMPOP` and [`BLMPOP`](../blmpop) are similar to the following, more limited, commands: * [`LPOP`](../lpop) or [`RPOP`](../rpop) which take only one key, and can return multiple elements. * [`BLPOP`](../blpop) or [`BRPOP`](../brpop) which take multiple keys, but return only one element from just one key. See [`BLMPOP`](../blmpop) for the blocking variant of this command. Elements are popped from either the left or right of the first non-empty list based on the passed argument. The number of returned elements is limited to the lower of the non-empty list's length and the count argument (which defaults to 1). Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): specifically: * A `nil` when no element could be popped. * A two-element array with the first element being the name of the key from which elements were popped, and the second element is an array of elements. 
Examples -------- ``` LMPOP 2 non1 non2 LEFT COUNT 10 LPUSH mylist "one" "two" "three" "four" "five" LMPOP 1 mylist LEFT LRANGE mylist 0 -1 LMPOP 1 mylist RIGHT COUNT 10 LPUSH mylist "one" "two" "three" "four" "five" LPUSH mylist2 "a" "b" "c" "d" "e" LMPOP 2 mylist mylist2 right count 3 LRANGE mylist 0 -1 LMPOP 2 mylist mylist2 right count 5 LMPOP 2 mylist mylist2 right count 10 EXISTS mylist mylist2 ``` redis BZMPOP BZMPOP ====== ``` BZMPOP ``` Syntax ``` BZMPOP timeout numkeys key [key ...] <MIN | MAX> [COUNT count] ``` Available since: 7.0.0 Time complexity: O(K) + O(M\*log(N)) where K is the number of provided keys, N being the number of elements in the sorted set, and M being the number of elements popped. ACL categories: `@write`, `@sortedset`, `@slow`, `@blocking`, `BZMPOP` is the blocking variant of [`ZMPOP`](../zmpop). When any of the sorted sets contains elements, this command behaves exactly like [`ZMPOP`](../zmpop). When used inside a [`MULTI`](../multi)/[`EXEC`](../exec) block, this command behaves exactly like [`ZMPOP`](../zmpop). When all sorted sets are empty, Redis will block the connection until another client adds members to one of the keys or until the `timeout` (a double value specifying the maximum number of seconds to block) elapses. A `timeout` of zero can be used to block indefinitely. See [`ZMPOP`](../zmpop) for more information. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): specifically: * A `nil` when no element could be popped. * A two-element array with the first element being the name of the key from which elements were popped, and the second element is an array of the popped elements. Every entry in the elements array is also an array that contains the member and its score. 
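The non-blocking `ZMPOP` semantics that `BZMPOP` falls back to can be sketched with plain Python dicts standing in for sorted sets (member to score; a client-side illustration of the semantics and the reply shape, not the server implementation):

```python
def zmpop(db, keys, where="MIN", count=1):
    """Pop up to `count` members from the first non-empty sorted set.
    `db` maps key -> {member: score}. Returns None (nil) when nothing
    can be popped, else [key, [[member, score], ...]] like the ZMPOP reply."""
    for key in keys:
        zset = db.get(key)
        if not zset:
            continue  # skip missing or empty keys
        reverse = (where == "MAX")
        # Order by score (members break ties) and take at most `count`.
        ordered = sorted(zset.items(), key=lambda kv: (kv[1], kv[0]), reverse=reverse)
        popped = ordered[:count]
        for member, _ in popped:
            del zset[member]
        return [key, [[m, s] for m, s in popped]]
    return None  # nil: no element could be popped

db = {"zs1": {}, "zs2": {"a": 1.0, "b": 2.0, "c": 3.0}}
print(zmpop(db, ["zs1", "zs2"], "MIN", count=2))  # ['zs2', [['a', 1.0], ['b', 2.0]]]
```

The blocking variant simply waits, up to `timeout` seconds, for any of the keys to become non-empty before applying the same logic.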
redis TS.DEL TS.DEL ====== ``` TS.DEL ``` Syntax ``` TS.DEL key fromTimestamp toTimestamp ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [TimeSeries 1.6.0](https://redis.io/docs/stack/timeseries) Time complexity: O(N) where N is the number of data points that will be removed Deletes all samples between two timestamps for a given time series. [Examples](#examples) Required arguments ------------------ `key` is the key name for the time series. `fromTimestamp` is the start timestamp for the range deletion. `toTimestamp` is the end timestamp for the range deletion. The given timestamp interval is closed (inclusive), meaning that samples whose timestamp equals the `fromTimestamp` or `toTimestamp` are also deleted. **Notes:** * If fromTimestamp is older than the retention period compared to the maximum existing timestamp, the deletion is discarded and an error is returned. * When deleting a sample from a time series for which compaction rules are defined: + If all the original samples for an affected compaction bucket are available, the compacted value is recalculated based on the remaining original samples, or removed if all original samples within the compaction bucket were deleted. + If some or all the original samples for an affected compaction bucket were expired, the deletion is discarded and an error is returned. * Explicitly deleting samples from a compacted time series may result in inconsistencies between the raw and the compacted data. The compaction process may override such samples. That being said, it is safe to explicitly delete samples from a compacted time series beyond the retention period of the original time series. Return value ------------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): The number of samples that were deleted, or [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors). 
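The closed-interval deletion can be illustrated in Python over a sorted list of (timestamp, value) samples (a sketch of the range semantics only; the retention and compaction checks described above are omitted):

```python
import bisect

def ts_del(samples, from_ts, to_ts):
    """Delete all samples whose timestamp t satisfies from_ts <= t <= to_ts
    (both ends inclusive). `samples` is a sorted list of (timestamp, value)
    pairs. Returns (remaining_samples, deleted_count), where the count
    mirrors the command's integer reply."""
    timestamps = [t for t, _ in samples]
    lo = bisect.bisect_left(timestamps, from_ts)   # first t >= from_ts
    hi = bisect.bisect_right(timestamps, to_ts)    # first t > to_ts
    return samples[:lo] + samples[hi:], hi - lo

samples = [(1000, 30.0), (1010, 35.0), (1020, 9999.0), (1030, 40.0)]
remaining, deleted = ts_del(samples, 1000, 1030)
print(deleted)    # 4 - both endpoints are included in the deleted range
print(remaining)  # []
```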
Examples -------- **Delete range of data points** Create time series for temperature in Tel Aviv and Jerusalem, then add different temperature samples. ``` 127.0.0.1:6379> TS.CREATE temp:TLV LABELS type temp location TLV OK 127.0.0.1:6379> TS.CREATE temp:JLM LABELS type temp location JLM OK 127.0.0.1:6379> TS.MADD temp:TLV 1000 30 temp:TLV 1010 35 temp:TLV 1020 9999 temp:TLV 1030 40 1) (integer) 1000 2) (integer) 1010 3) (integer) 1020 4) (integer) 1030 127.0.0.1:6379> TS.MADD temp:JLM 1005 30 temp:JLM 1015 35 temp:JLM 1025 9999 temp:JLM 1035 40 1) (integer) 1005 2) (integer) 1015 3) (integer) 1025 4) (integer) 1035 ``` Delete the range of data points for temperature in Tel Aviv. ``` 127.0.0.1:6379> TS.DEL temp:TLV 1000 1030 (integer) 4 ``` See also -------- [`TS.ADD`](../ts.add) Related topics -------------- [RedisTimeSeries](https://redis.io/docs/stack/timeseries) redis COMMAND COMMAND ======= ``` COMMAND GETKEYS ``` Syntax ``` COMMAND GETKEYS command [arg [arg ...]] ``` Available since: 2.8.13 Time complexity: O(N) where N is the number of arguments to the command ACL categories: `@slow`, `@connection`, Returns [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) of keys from a full Redis command. `COMMAND GETKEYS` is a helper command to let you find the keys from a full Redis command. [`COMMAND`](../command) provides information on how to find the key names of each command (see `firstkey`, [key specifications](https://redis.io/topics/key-specs#logical-operation-flags), and `movablekeys`), but in some cases it's not possible to find keys of certain commands and then the entire command must be parsed to discover some / all key names. You can use `COMMAND GETKEYS` or [`COMMAND GETKEYSANDFLAGS`](../command-getkeysandflags) to discover key names directly from how Redis parses the commands. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list of keys from your command. 
Examples -------- ``` COMMAND GETKEYS MSET a b c d e f COMMAND GETKEYS EVAL "not consulted" 3 key1 key2 key3 arg1 arg2 arg3 argN COMMAND GETKEYS SORT mylist ALPHA STORE outlist ``` redis TDIGEST.ADD TDIGEST.ADD =========== ``` TDIGEST.ADD ``` Syntax ``` TDIGEST.ADD key value [value ...] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.4.0](https://redis.io/docs/stack/bloom) Time complexity: O(N), where N is the number of samples to add Adds one or more observations to a t-digest sketch. Required arguments ------------------ `key` is key name for an existing t-digest sketch. `value` is value of an observation (floating-point). Return value ------------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) - `OK` if executed correctly, or [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors) otherwise. Examples -------- ``` redis> TDIGEST.ADD t 1 2 3 OK ``` ``` redis> TDIGEST.ADD t string (error) ERR T-Digest: error parsing val parameter ``` redis COMMAND COMMAND ======= ``` COMMAND GETKEYSANDFLAGS ``` Syntax ``` COMMAND GETKEYSANDFLAGS command [arg [arg ...]] ``` Available since: 7.0.0 Time complexity: O(N) where N is the number of arguments to the command ACL categories: `@slow`, `@connection`, Returns [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) of keys from a full Redis command and their usage flags. `COMMAND GETKEYSANDFLAGS` is a helper command to let you find the keys from a full Redis command together with flags indicating what each key is used for. [`COMMAND`](../command) provides information on how to find the key names of each command (see `firstkey`, [key specifications](https://redis.io/topics/key-specs#logical-operation-flags), and `movablekeys`), but in some cases it's not possible to find keys of certain commands and then the entire command must be parsed to discover some / all key names. 
You can use [`COMMAND GETKEYS`](../command-getkeys) or `COMMAND GETKEYSANDFLAGS` to discover key names directly from how Redis parses the commands. Refer to [key specifications](https://redis.io/topics/key-specs#logical-operation-flags) for information about the meaning of the key flags. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list of keys from your command. Each element of the array is an array containing key name in the first entry, and flags in the second. Examples -------- ``` COMMAND GETKEYS MSET a b c d e f COMMAND GETKEYS EVAL "not consulted" 3 key1 key2 key3 arg1 arg2 arg3 argN COMMAND GETKEYSANDFLAGS LMOVE mylist1 mylist2 left left ``` redis SET SET === ``` SET ``` Syntax ``` SET key value [NX | XX] [GET] [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL] ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@write`, `@string`, `@slow`, Set `key` to hold the string `value`. If `key` already holds a value, it is overwritten, regardless of its type. Any previous time to live associated with the key is discarded on successful `SET` operation. Options ------- The `SET` command supports a set of options that modify its behavior: * `EX` *seconds* -- Set the specified expire time, in seconds. * `PX` *milliseconds* -- Set the specified expire time, in milliseconds. * `EXAT` *timestamp-seconds* -- Set the specified Unix time at which the key will expire, in seconds. * `PXAT` *timestamp-milliseconds* -- Set the specified Unix time at which the key will expire, in milliseconds. * `NX` -- Only set the key if it does not already exist. * `XX` -- Only set the key if it already exists. * `KEEPTTL` -- Retain the time to live associated with the key. * `GET` -- Return the old string stored at key, or nil if key did not exist. An error is returned and `SET` aborted if the value stored at key is not a string. 
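The conditional options can be illustrated with a minimal in-memory model (a hypothetical `set_cmd` function in Python, not Redis itself); it returns what the server would reply: `OK`, nil (`None`), or the old value when `GET` is given:

```python
# Hypothetical sketch of SET's NX/XX/GET semantics over a plain dict.
def set_cmd(db: dict, key: str, value: str,
            nx: bool = False, xx: bool = False, get: bool = False):
    exists = key in db
    old = db.get(key)
    if (nx and exists) or (xx and not exists):
        # Condition not met: nil reply, or the old value when GET is used.
        return old if get else None
    db[key] = value
    return old if get else "OK"

db = {}
assert set_cmd(db, "k", "v1", nx=True) == "OK"   # key absent, NX succeeds
assert set_cmd(db, "k", "v2", nx=True) is None   # key exists, NX fails -> nil
assert set_cmd(db, "k", "v3", get=True) == "v1"  # GET returns the old value
assert db["k"] == "v3"
```

The expiration options are omitted from this sketch; only the reply logic for the conditional flags is modeled.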
Note: Since the `SET` command options can replace [`SETNX`](../setnx), [`SETEX`](../setex), [`PSETEX`](../psetex), [`GETSET`](../getset), it is possible that in future versions of Redis these commands will be deprecated and finally removed. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` if `SET` was executed correctly. [Null reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): `(nil)` if the `SET` operation was not performed because the user specified the `NX` or `XX` option but the condition was not met. If the command is issued with the `GET` option, the above does not apply. It will instead reply as follows, regardless of whether the `SET` was actually performed: [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): the old string value stored at key. [Null reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): `(nil)` if the key did not exist. Examples -------- ``` SET mykey "Hello" GET mykey SET anotherkey "will expire in a minute" EX 60 ``` Patterns -------- **Note:** The following pattern is discouraged in favor of [the Redlock algorithm](https://redis.io/topics/distlock) which is only a bit more complex to implement, but offers better guarantees and is fault tolerant. The command `SET resource-name anystring NX EX max-lock-time` is a simple way to implement a locking system with Redis. A client can acquire the lock if the above command returns `OK` (or retry after some time if the command returns Nil), and remove the lock just using [`DEL`](../del). The lock will be auto-released after the expire time is reached. It is possible to make this system more robust by modifying the unlock schema as follows: * Instead of setting a fixed string, set a non-guessable large random string, called token. * Instead of releasing the lock with [`DEL`](../del), send a script that only removes the key if the value matches. 
This prevents a client that tries to release the lock after the expire time from deleting the key created by another client that acquired the lock later. An example of an unlock script would be similar to the following: ``` if redis.call("get",KEYS[1]) == ARGV[1] then return redis.call("del",KEYS[1]) else return 0 end ``` The script should be called with `EVAL ...script... 1 resource-name token-value` History ------- * Starting with Redis version 2.6.12: Added the `EX`, `PX`, `NX` and `XX` options. * Starting with Redis version 6.0.0: Added the `KEEPTTL` option. * Starting with Redis version 6.2.0: Added the `GET`, `EXAT` and `PXAT` options. * Starting with Redis version 7.0.0: Allowed the `NX` and `GET` options to be used together. redis ACL ACL === ``` ACL DELUSER ``` Syntax ``` ACL DELUSER username [username ...] ``` Available since: 6.0.0 Time complexity: O(1) amortized time considering the typical user. ACL categories: `@admin`, `@slow`, `@dangerous`, Delete all the specified ACL users and terminate all the connections that are authenticated with such users. Note: the special `default` user cannot be removed from the system; this is the default user that every new connection is authenticated with. The list of users may include usernames that do not exist, in which case no operation is performed for the non-existing users. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): The number of users that were deleted. This number will not always match the number of arguments since certain users may not exist. Examples -------- ``` > ACL DELUSER antirez 1 ``` redis SYNC SYNC ==== ``` SYNC ``` Syntax ``` SYNC ``` Available since: 1.0.0 Time complexity: ACL categories: `@admin`, `@slow`, `@dangerous`, Initiates a replication stream from the master. The `SYNC` command is called by Redis replicas for initiating a replication stream from the master. It has been replaced in newer versions of Redis by [`PSYNC`](../psync). 
For more information about replication in Redis please check the [replication page](https://redis.io/topics/replication). Return ------ **Non standard return value**, a bulk transfer of the data followed by [`PING`](../ping) and write requests from the master. redis FT.INFO FT.INFO ======= ``` FT.INFO ``` Syntax ``` FT.INFO index ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.0.0](https://redis.io/docs/stack/search) Time complexity: O(1) Return information and statistics on the index [Examples](#examples) Required arguments ------------------ `index` is full-text index name. You must first create the index using [`FT.CREATE`](../ft.create). Return ------ FT.INFO returns an array reply with pairs of keys and values. Returned values include: * `index_definition`: reflection of [`FT.CREATE`](../ft.create) command parameters. * `fields`: index schema - field names, types, and attributes. * Number of documents. * Number of distinct terms. * Average bytes per record. * Size and capacity of the index buffers. * Indexing state and percentage as well as failures: + `indexing`: whether or not the index is being scanned in the background. + `percent_indexed`: progress of background indexing (1 if complete). + `hash_indexing_failures`: number of failures due to operations not compatible with index schema. Optional statistics include: * `garbage collector` for all options other than NOGC. * `cursors` if a cursor exists for the index. * `stopword lists` if a custom stopword list is used. 
Examples -------- **Return statistics about an index** ``` 127.0.0.1:6379> FT.INFO idx 1) index_name 2) wikipedia 3) index_options 4) (empty array) 11) score_field 12) __score 13) payload_field 14) __payload 7) fields 8) 1) 1) title 2) type 3) TEXT 4) WEIGHT 5) "1" 6) SORTABLE 2) 1) body 2) type 3) TEXT 4) WEIGHT 5) "1" 3) 1) id 2) type 3) NUMERIC 4) 1) subject location 2) type 3) GEO 9) num_docs 10) "0" 11) max_doc_id 12) "345678" 13) num_terms 14) "691356" 15) num_records 16) "0" 17) inverted_sz_mb 18) "0" 19) vector_index_sz_mb 20) "0" 21) total_inverted_index_blocks 22) "933290" 23) offset_vectors_sz_mb 24) "0.65932846069335938" 25) doc_table_size_mb 26) "29.893482208251953" 27) sortable_values_size_mb 28) "11.432285308837891" 29) key_table_size_mb 30) "1.239776611328125e-05" 31) records_per_doc_avg 32) "-nan" 33) bytes_per_record_avg 34) "-nan" 35) offsets_per_term_avg 36) "inf" 37) offset_bits_per_record_avg 38) "8" 39) hash_indexing_failures 40) "0" 41) indexing 42) "0" 43) percent_indexed 44) "1" 45) number_of_uses 46) 1 47) gc_stats 48) 1) bytes_collected 2) "4148136" 3) total_ms_run 4) "14796" 5) total_cycles 6) "1" 7) average_cycle_time_ms 8) "14796" 9) last_run_time_ms 10) "14796" 11) gc_numeric_trees_missed 12) "0" 13) gc_blocks_denied 14) "0" 49) cursor_stats 50) 1) global_idle 2) (integer) 0 3) global_total 4) (integer) 0 5) index_capacity 6) (integer) 128 7) index_total 8) (integer) 0 51) stopwords_list 52) 1) "tlv" 2) "summer" 3) "2020" ``` See also -------- [`FT.CREATE`](../ft.create) | [`FT.SEARCH`](../ft.search) Related topics -------------- [RediSearch](https://redis.io/docs/stack/search) redis CF.LOADCHUNK CF.LOADCHUNK ============ ``` CF.LOADCHUNK ``` Syntax ``` CF.LOADCHUNK key iterator data ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 1.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(n), where n is the capacity Restores a filter 
previously saved using `SCANDUMP`. See the `SCANDUMP` command for example usage. This command overwrites any cuckoo filter stored under `key`. Make sure that the cuckoo filter is not modified between invocations. ### Parameters * **key**: Name of the key to restore * **iter**: Iterator value associated with `data` (returned by `SCANDUMP`) * **data**: Current data chunk (returned by `SCANDUMP`) Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) - `OK` if executed correctly, or [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors) otherwise. Examples -------- See BF.SCANDUMP for an example. redis SWAPDB SWAPDB ====== ``` SWAPDB ``` Syntax ``` SWAPDB index1 index2 ``` Available since: 4.0.0 Time complexity: O(N) where N is the count of clients watching or blocking on keys from both databases. ACL categories: `@keyspace`, `@write`, `@fast`, `@dangerous`, This command swaps two Redis databases, so that immediately all the clients connected to a given database will see the data of the other database, and the other way around. Example: ``` SWAPDB 0 1 ``` This will swap database 0 with database 1. All the clients connected with database 0 will immediately see the new data, exactly like all the clients connected with database 1 will see the data that was formerly of database 0. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` if `SWAPDB` was executed correctly. Examples -------- ``` SWAPDB 0 1 ```
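The swap semantics can be sketched in Python (a hypothetical model, not the server's implementation): each client holds a database *index*, so exchanging the contents behind two indexes instantly changes what every connected client sees.

```python
# Hypothetical model: databases are dicts keyed by index; SWAPDB
# exchanges the contents, while clients keep their index unchanged.
def swapdb(dbs: dict, i: int, j: int) -> str:
    dbs[i], dbs[j] = dbs[j], dbs[i]
    return "OK"

databases = {0: {"a": "1"}, 1: {"b": "2"}}
client_db_index = 0                              # a client connected to database 0
assert swapdb(databases, 0, 1) == "OK"
assert databases[client_db_index] == {"b": "2"}  # same index, new data
```

No data is copied or moved key by key; only the two database references change, which is why all clients flip over at once.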
redis TDIGEST.CREATE TDIGEST.CREATE ============== ``` TDIGEST.CREATE ``` Syntax ``` TDIGEST.CREATE key [COMPRESSION compression] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.4.0](https://redis.io/docs/stack/bloom) Time complexity: O(1) Allocates memory and initializes a new t-digest sketch. Required arguments ------------------ `key` is key name for this new t-digest sketch. Optional arguments ------------------ `COMPRESSION compression` is a controllable tradeoff between accuracy and memory consumption. 100 is a common value for normal uses. 1000 is more accurate. If no value is passed, the compression defaults to 100. For more information on scaling of accuracy versus the compression parameter see [*The t-digest: Efficient estimates of distributions*](https://www.sciencedirect.com/science/article/pii/S2665963820300403). Return value ------------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) - `OK` if executed correctly, or [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors) otherwise. Examples -------- ``` redis> TDIGEST.CREATE t COMPRESSION 100 OK ``` redis XINFO XINFO ===== ``` XINFO GROUPS ``` Syntax ``` XINFO GROUPS key ``` Available since: 5.0.0 Time complexity: O(1) ACL categories: `@read`, `@stream`, `@slow`, This command returns the list of all consumer groups of the stream stored at `<key>`. 
By default, only the following information is provided for each of the groups: * **name**: the consumer group's name * **consumers**: the number of consumers in the group * **pending**: the length of the group's pending entries list (PEL), which are messages that were delivered but are yet to be acknowledged * **last-delivered-id**: the ID of the last entry delivered to the group's consumers * **entries-read**: the logical "read counter" of the last entry delivered to the group's consumers * **lag**: the number of entries in the stream that are still waiting to be delivered to the group's consumers, or a NULL when that number can't be determined. ### Consumer group lag The lag of a given consumer group is the number of entries in the range between the group's `entries_read` and the stream's `entries_added`. Put differently, it is the number of entries that are yet to be delivered to the group's consumers. The values and trends of this metric are helpful in making scaling decisions about the consumer group. You can address high lag values by adding more consumers to the group, whereas low values may indicate that you can remove consumers from the group to scale it down. Redis reports the lag of a consumer group by keeping two counters: the number of all entries added to the stream and the number of logical reads made by the consumer group. The lag is the difference between these two. The stream's counter (the `entries_added` field of the [`XINFO STREAM`](../xinfo-stream) command) is incremented by one with every [`XADD`](../xadd) and counts all of the entries added to the stream during its lifetime. The consumer group's counter, `entries_read`, is the logical counter of entries that the group had read. It is important to note that this counter is only a heuristic rather than an accurate counter, and therefore the use of the term "logical". The counter attempts to reflect the number of entries that the group **should have read** to get to its current `last-delivered-id`. 
The `entries_read` counter is accurate only in a perfect world, where a consumer group starts at the stream's first entry and processes all of its entries (i.e., no entries deleted before processing). There are two special cases in which this mechanism is unable to report the lag: 1. A consumer group is created or set with an arbitrary last delivered ID (the [`XGROUP CREATE`](../xgroup-create) and [`XGROUP SETID`](../xgroup-setid) commands, respectively). An arbitrary ID is any ID that isn't the ID of the stream's first entry, its last entry or the zero ("0-0") ID. 2. One or more entries between the group's `last-delivered-id` and the stream's `last-generated-id` were deleted (with [`XDEL`](../xdel) or a trimming operation). In both cases, the group's read counter is considered invalid, and the returned value is set to NULL to signal that the lag isn't currently available. However, the lag is only temporarily unavailable. It is restored automatically during regular operation as consumers keep processing messages. Once the consumer group delivers the last message in the stream to its members, it will be set with the correct logical read counter, and tracking its lag can be resumed. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): a list of consumer groups. 
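The bookkeeping described above can be sketched in Python (a hypothetical model; `entries_added` stands for the stream's counter, `entries_read` for the group's logical read counter, and `None` for an invalidated counter):

```python
# Hypothetical sketch: lag = entries_added - entries_read,
# or NULL (None) when the read counter has been invalidated
# (arbitrary XGROUP SETID, or XDEL within the undelivered range).
def group_lag(entries_added: int, entries_read):
    if entries_read is None:
        return None
    return entries_added - entries_read

assert group_lag(10, 8) == 2        # two entries not yet delivered
assert group_lag(10, None) is None  # lag currently unavailable
```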
Examples -------- ``` > XINFO GROUPS mystream 1) 1) "name" 2) "mygroup" 3) "consumers" 4) (integer) 2 5) "pending" 6) (integer) 2 7) "last-delivered-id" 8) "1638126030001-0" 9) "entries-read" 10) (integer) 2 11) "lag" 12) (integer) 0 2) 1) "name" 2) "some-other-group" 3) "consumers" 4) (integer) 1 5) "pending" 6) (integer) 0 7) "last-delivered-id" 8) "1638126028070-0" 9) "entries-read" 10) (integer) 1 11) "lag" 12) (integer) 1 ``` History ------- * Starting with Redis version 7.0.0: Added the `entries-read` and `lag` fields redis ZPOPMIN ZPOPMIN ======= ``` ZPOPMIN ``` Syntax ``` ZPOPMIN key [count] ``` Available since: 5.0.0 Time complexity: O(log(N)*M) with N being the number of elements in the sorted set, and M being the number of elements popped. ACL categories: `@write`, `@sortedset`, `@fast`, Removes and returns up to `count` members with the lowest scores in the sorted set stored at `key`. When left unspecified, the default value for `count` is 1. Specifying a `count` value that is higher than the sorted set's cardinality will not produce an error. When returning multiple elements, the one with the lowest score will be the first, followed by the elements with greater scores. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list of popped elements and scores. Examples -------- ``` ZADD myzset 1 "one" ZADD myzset 2 "two" ZADD myzset 3 "three" ZPOPMIN myzset ``` redis SDIFF SDIFF ===== ``` SDIFF ``` Syntax ``` SDIFF key [key ...] ``` Available since: 1.0.0 Time complexity: O(N) where N is the total number of elements in all given sets. ACL categories: `@read`, `@set`, `@slow`, Returns the members of the set resulting from the difference between the first set and all the successive sets. For example: ``` key1 = {a,b,c,d} key2 = {c} key3 = {a,c,e} SDIFF key1 key2 key3 = {b,d} ``` Keys that do not exist are considered to be empty sets. 
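These semantics map directly onto Python's set difference; a minimal sketch (hypothetical `sdiff` helper, not the server's implementation) that also treats missing keys as empty sets:

```python
# Hypothetical sketch of SDIFF: subtract every successive set
# from the first one; absent keys behave as empty sets.
def sdiff(db: dict, *keys: str) -> set:
    first, *rest = keys
    result = set(db.get(first, set()))
    for k in rest:
        result -= db.get(k, set())
    return result

db = {"key1": {"a", "b", "c", "d"}, "key2": {"c"}, "key3": {"a", "c", "e"}}
assert sdiff(db, "key1", "key2", "key3") == {"b", "d"}
assert sdiff(db, "key1", "missing") == {"a", "b", "c", "d"}  # absent key = empty set
```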
Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list with members of the resulting set. Examples -------- ``` SADD key1 "a" SADD key1 "b" SADD key1 "c" SADD key2 "c" SADD key2 "d" SADD key2 "e" SDIFF key1 key2 ``` redis TDIGEST.MAX TDIGEST.MAX =========== ``` TDIGEST.MAX ``` Syntax ``` TDIGEST.MAX key ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.4.0](https://redis.io/docs/stack/bloom) Time complexity: O(1) Returns the maximum observation value from a t-digest sketch. Required arguments ------------------ `key` is key name for an existing t-digest sketch. Return value ------------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) of the maximum observation value from a sketch. The result is always accurate. 'nan' if the sketch is empty. Examples -------- ``` redis> TDIGEST.CREATE t OK redis> TDIGEST.MAX t "nan" redis> TDIGEST.ADD t 3 4 1 2 5 OK redis> TDIGEST.MAX t "5" ``` redis ACL ACL === ``` ACL GENPASS ``` Syntax ``` ACL GENPASS [bits] ``` Available since: 6.0.0 Time complexity: O(1) ACL categories: `@slow`, ACL users need a solid password in order to authenticate to the server without security risks. Such a password does not need to be remembered by humans, but only by computers, so it can be very long and strong (unguessable by an external attacker). The `ACL GENPASS` command generates a password starting from /dev/urandom if available, otherwise (in systems without /dev/urandom) it uses a weaker system that is likely still better than picking a weak password by hand. By default (if /dev/urandom is available) the password is strong and can be used for other uses in the context of a Redis application, for instance in order to create unique session identifiers or other kind of unguessable and not colliding IDs. The password generation is also very cheap because we don't really ask /dev/urandom for bits at every execution. 
At startup Redis creates a seed using /dev/urandom, then it will use SHA256 in counter mode, with HMAC-SHA256(seed,counter) as primitive, in order to create more random bytes as needed. This means that the application developer should feel free to abuse `ACL GENPASS` to create as many secure pseudorandom strings as needed. The command output is a hexadecimal representation of a binary string. By default it emits 256 bits (so 64 hex characters). The user can provide an argument in form of number of bits to emit from 1 to 1024 to change the output length. Note that the number of bits provided is always rounded to the next multiple of 4. So for instance asking for just a 1-bit password will result in 4 bits being emitted, in the form of a single hex character. Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): by default a 64-character string representing 256 bits of pseudorandom data. Otherwise, if an argument was given, the output string length is the number of specified bits (rounded to the next multiple of 4) divided by 4. Examples -------- ``` > ACL GENPASS "dd721260bfe1b3d9601e7fbab36de6d04e2e67b0ef1c53de59d45950db0dd3cc" > ACL GENPASS 32 "355ef3dd" > ACL GENPASS 5 "90" ``` redis FUNCTION FUNCTION ======== ``` FUNCTION RESTORE ``` Syntax ``` FUNCTION RESTORE serialized-value [FLUSH | APPEND | REPLACE] ``` Available since: 7.0.0 Time complexity: O(N) where N is the number of functions on the payload ACL categories: `@write`, `@slow`, `@scripting`, Restore libraries from the serialized payload. You can use the optional *policy* argument to provide a policy for handling existing libraries. The following policies are allowed: * **APPEND:** appends the restored libraries to the existing libraries and aborts on collision. This is the default policy. * **FLUSH:** deletes all existing libraries before restoring the payload. 
* **REPLACE:** appends the restored libraries to the existing libraries, replacing any existing ones in case of name collisions. Note that this policy doesn't prevent function name collisions, only libraries. For more information please refer to [Introduction to Redis Functions](https://redis.io/topics/functions-intro). Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) redis PEXPIRETIME PEXPIRETIME =========== ``` PEXPIRETIME ``` Syntax ``` PEXPIRETIME key ``` Available since: 7.0.0 Time complexity: O(1) ACL categories: `@keyspace`, `@read`, `@fast`, `PEXPIRETIME` has the same semantic as [`EXPIRETIME`](../expiretime), but returns the absolute Unix expiration timestamp in milliseconds instead of seconds. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): Expiration Unix timestamp in milliseconds, or a negative value in order to signal an error (see the description below). * The command returns `-1` if the key exists but has no associated expiration time. * The command returns `-2` if the key does not exist. Examples -------- ``` SET mykey "Hello" PEXPIREAT mykey 33177117420000 PEXPIRETIME mykey ``` redis SDIFFSTORE SDIFFSTORE ========== ``` SDIFFSTORE ``` Syntax ``` SDIFFSTORE destination key [key ...] ``` Available since: 1.0.0 Time complexity: O(N) where N is the total number of elements in all given sets. ACL categories: `@write`, `@set`, `@slow`, This command is equal to [`SDIFF`](../sdiff), but instead of returning the resulting set, it is stored in `destination`. If `destination` already exists, it is overwritten. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of elements in the resulting set. 
Examples -------- ``` SADD key1 "a" SADD key1 "b" SADD key1 "c" SADD key2 "c" SADD key2 "d" SADD key2 "e" SDIFFSTORE key key1 key2 SMEMBERS key ``` redis SLAVEOF SLAVEOF ======= ``` SLAVEOF (deprecated) ``` As of Redis version 5.0.0, this command is regarded as deprecated. It can be replaced by [`REPLICAOF`](../replicaof) when migrating or writing new code. Syntax ``` SLAVEOF host port ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, **A note about the word slave used in this man page and command name**: starting with Redis version 5, if not for backward compatibility, the Redis project no longer uses the word slave. Please use the new command [`REPLICAOF`](../replicaof). The command `SLAVEOF` will continue to work for backward compatibility. The `SLAVEOF` command can change the replication settings of a replica on the fly. If a Redis server is already acting as a replica, the command `SLAVEOF NO ONE` will turn off the replication, turning the Redis server into a MASTER. In the proper form `SLAVEOF hostname port` will make the server a replica of another server listening at the specified hostname and port. If a server is already a replica of some master, `SLAVEOF hostname port` will stop the replication against the old server and start the synchronization against the new one, discarding the old dataset. The form `SLAVEOF NO ONE` will stop replication, turning the server into a MASTER, but will not discard the already replicated dataset. So, if the old master stops working, it is possible to turn the replica into a master and set the application to use this new master in read/write. Later when the other Redis server is fixed, it can be reconfigured to work as a replica. 
Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) redis JSON.STRLEN JSON.STRLEN =========== ``` JSON.STRLEN ``` Syntax ``` JSON.STRLEN key [path] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [JSON 1.0.0](https://redis.io/docs/stack/json) Time complexity: O(1) when path is evaluated to a single value, O(N) when path is evaluated to multiple values, where N is the size of the key Report the length of the JSON String at `path` in `key` [Examples](#examples) Required arguments ------------------ `key` is key to parse. Optional arguments ------------------ `path` is JSONPath to specify. Default is root `$`, if not provided. Returns null if the `key` or `path` do not exist. Return ------ JSON.STRLEN returns by recursive descent an array of integer replies for each path, the string's length, or `nil`, if the matching JSON value is not a string. For more information about replies, see [Redis serialization protocol specification](https://redis.io/docs/reference/protocol-spec). Examples -------- ``` 127.0.0.1:6379> JSON.SET doc $ '{"a":"foo", "nested": {"a": "hello"}, "nested2": {"a": 31}}' OK 127.0.0.1:6379> JSON.STRLEN doc $..a 1) (integer) 3 2) (integer) 5 3) (nil) ``` See also -------- [`JSON.ARRLEN`](../json.arrlen) | [`JSON.ARRINSERT`](../json.arrinsert) Related topics -------------- * [RedisJSON](https://redis.io/docs/stack/json) * [Index and search JSON documents](https://redis.io/docs/stack/search/indexing_json) redis ZINTERSTORE ZINTERSTORE =========== ``` ZINTERSTORE ``` Syntax ``` ZINTERSTORE destination numkeys key [key ...] [WEIGHTS weight [weight ...]] [AGGREGATE <SUM | MIN | MAX>] ``` Available since: 2.0.0 Time complexity: O(N*K)+O(M*log(M)) worst case with N being the smallest input sorted set, K being the number of input sorted sets and M being the number of elements in the resulting sorted set. 
ACL categories: `@write`, `@sortedset`, `@slow`, Computes the intersection of `numkeys` sorted sets given by the specified keys, and stores the result in `destination`. It is mandatory to provide the number of input keys (`numkeys`) before passing the input keys and the other (optional) arguments. By default, the resulting score of an element is the sum of its scores in the sorted sets where it exists. Because intersection requires an element to be a member of every given sorted set, this results in the score of every element in the resulting sorted set to be equal to the number of input sorted sets. For a description of the `WEIGHTS` and `AGGREGATE` options, see [`ZUNIONSTORE`](../zunionstore). If `destination` already exists, it is overwritten. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of elements in the resulting sorted set at `destination`. Examples -------- ``` ZADD zset1 1 "one" ZADD zset1 2 "two" ZADD zset2 1 "one" ZADD zset2 2 "two" ZADD zset2 3 "three" ZINTERSTORE out 2 zset1 zset2 WEIGHTS 2 3 ZRANGE out 0 -1 WITHSCORES ``` redis BZPOPMAX BZPOPMAX ======== ``` BZPOPMAX ``` Syntax ``` BZPOPMAX key [key ...] timeout ``` Available since: 5.0.0 Time complexity: O(log(N)) with N being the number of elements in the sorted set. ACL categories: `@write`, `@sortedset`, `@fast`, `@blocking`, `BZPOPMAX` is the blocking variant of the sorted set [`ZPOPMAX`](../zpopmax) primitive. It is the blocking version because it blocks the connection when there are no members to pop from any of the given sorted sets. A member with the highest score is popped from first sorted set that is non-empty, with the given keys being checked in the order that they are given. The `timeout` argument is interpreted as a double value specifying the maximum number of seconds to block. A timeout of zero can be used to block indefinitely. 
See the [BZPOPMIN documentation](../bzpopmin) for the exact semantics, since `BZPOPMAX` is identical to [`BZPOPMIN`](../bzpopmin) with the only difference being that it pops members with the highest scores instead of popping the ones with the lowest scores. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): specifically: * A `nil` multi-bulk when no element could be popped and the timeout expired. * A three-element multi-bulk with the first element being the name of the key where a member was popped, the second element is the popped member itself, and the third element is the score of the popped element. Examples -------- ``` redis> DEL zset1 zset2 (integer) 0 redis> ZADD zset1 0 a 1 b 2 c (integer) 3 redis> BZPOPMAX zset1 zset2 0 1) "zset1" 2) "c" 3) "2" ``` History ------- * Starting with Redis version 6.0.0: `timeout` is interpreted as a double instead of an integer. redis CF.INFO CF.INFO ======= ``` CF.INFO ``` Syntax ``` CF.INFO key ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 1.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(1) Return information about `key` ### Parameters * **key**: Name of the key to return information about Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) with information of the filter. Examples -------- ``` redis> CF.INFO cf 1) Size 2) (integer) 1080 3) Number of buckets 4) (integer) 512 5) Number of filter 6) (integer) 1 7) Number of items inserted 8) (integer) 0 9) Number of items deleted 10) (integer) 0 11) Bucket size 12) (integer) 2 13) Expansion rate 14) (integer) 1 15) Max iteration 16) (integer) 20 ``` redis TDIGEST.CDF TDIGEST.CDF =========== ``` TDIGEST.CDF ``` Syntax ``` TDIGEST.CDF key value [value ...] 
``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.4.0](https://redis.io/docs/stack/bloom) Time complexity: O(1) Returns, for each input value, an estimation of the fraction (floating-point) of (observations smaller than the given value + half the observations equal to the given value). Multiple fractions can be retrieved in a single call. Required arguments ------------------ `key` is the key name of an existing t-digest sketch. `value` is the value for which the CDF (Cumulative Distribution Function) should be retrieved. Return value ------------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) - the command returns an array of floating-points populated with fraction_1, fraction_2, ..., fraction_N. All values are 'nan' if the sketch is empty. Examples -------- ``` redis> TDIGEST.CREATE t COMPRESSION 1000 OK redis> TDIGEST.ADD t 1 2 2 3 3 3 4 4 4 4 5 5 5 5 5 OK redis> TDIGEST.CDF t 0 1 2 3 4 5 6 1) "0" 2) "0.033333333333333333" 3) "0.13333333333333333" 4) "0.29999999999999999" 5) "0.53333333333333333" 6) "0.83333333333333337" 7) "1" ```
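As a rough illustration of what the reply above represents, the fraction for each value can be computed exactly from the raw observations. This is a hedged sketch of the quantity a t-digest approximates (a real t-digest estimates it from compressed centroids, not raw data):

```python
# Exact version of the fraction TDIGEST.CDF estimates:
# (observations smaller than value + half the observations equal to value) / total.

def cdf_fraction(observations, value):
    smaller = sum(1 for x in observations if x < value)
    equal = sum(1 for x in observations if x == value)
    return (smaller + equal / 2) / len(observations)

# The same data as in the TDIGEST.ADD example above.
obs = [1, 2, 2, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 5]
fractions = [cdf_fraction(obs, v) for v in range(7)]
# rounds to the reply above: 0, 0.033..., 0.133..., 0.3, 0.533..., 0.833..., 1
```

On a small data set like this the sketch reproduces the Redis reply exactly; on large data sets the t-digest's answers are approximate.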
redis CLIENT CLIENT ====== ``` CLIENT GETREDIR ``` Syntax ``` CLIENT GETREDIR ``` Available since: 6.0.0 Time complexity: O(1) ACL categories: `@slow`, `@connection`, This command returns the client ID we are redirecting our [tracking](https://redis.io/topics/client-side-caching) notifications to. We set a client to redirect to when using [`CLIENT TRACKING`](../client-tracking) to enable tracking. However, in order to avoid forcing client library implementations to remember the ID that notifications are redirected to, this command exists to improve introspection and allow clients to check later if redirection is active and towards which client ID. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the ID of the client we are redirecting the notifications to. The command returns `-1` if client tracking is not enabled, or `0` if client tracking is enabled but we are not redirecting the notifications to any client. redis BRPOP BRPOP ===== ``` BRPOP ``` Syntax ``` BRPOP key [key ...] timeout ``` Available since: 2.0.0 Time complexity: O(N) where N is the number of provided keys. ACL categories: `@write`, `@list`, `@slow`, `@blocking`, `BRPOP` is a blocking list pop primitive. It is the blocking version of [`RPOP`](../rpop) because it blocks the connection when there are no elements to pop from any of the given lists. An element is popped from the tail of the first list that is non-empty, with the given keys being checked in the order that they are given. See the [BLPOP documentation](../blpop) for the exact semantics, since `BRPOP` is identical to [`BLPOP`](../blpop) with the only difference being that it pops elements from the tail of a list instead of popping from the head. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): specifically: * A `nil` multi-bulk when no element could be popped and the timeout expired. 
* A two-element multi-bulk with the first element being the name of the key where an element was popped and the second element being the value of the popped element. Examples -------- ``` redis> DEL list1 list2 (integer) 0 redis> RPUSH list1 a b c (integer) 3 redis> BRPOP list1 list2 0 1) "list1" 2) "c" ``` History ------- * Starting with Redis version 6.0.0: `timeout` is interpreted as a double instead of an integer. redis MEMORY MEMORY ====== ``` MEMORY DOCTOR ``` Syntax ``` MEMORY DOCTOR ``` Available since: 4.0.0 Time complexity: O(1) ACL categories: `@slow`, The `MEMORY DOCTOR` command reports about different memory-related issues that the Redis server experiences, and advises about possible remedies. Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings) redis SAVE SAVE ==== ``` SAVE ``` Syntax ``` SAVE ``` Available since: 1.0.0 Time complexity: O(N) where N is the total number of keys in all databases ACL categories: `@admin`, `@slow`, `@dangerous`, The `SAVE` command performs a **synchronous** save of the dataset producing a *point in time* snapshot of all the data inside the Redis instance, in the form of an RDB file. You almost never want to call `SAVE` in production environments where it will block all the other clients. Instead usually [`BGSAVE`](../bgsave) is used. However, in case of issues preventing Redis from creating the background saving child (for instance errors in the fork(2) system call), the `SAVE` command can be a good last resort to perform the dump of the latest dataset. Please refer to the [persistence documentation](https://redis.io/topics/persistence) for detailed information. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): The command returns OK on success. redis CLUSTER CLUSTER ======= ``` CLUSTER RESET ``` Syntax ``` CLUSTER RESET [HARD | SOFT] ``` Available since: 3.0.0 Time complexity: O(N) where N is the number of known nodes. 
The command may execute a FLUSHALL as a side effect. ACL categories: `@admin`, `@slow`, `@dangerous`, Reset a Redis Cluster node, in a more or less drastic way depending on the reset type, which can be **hard** or **soft**. Note that this command **does not work for masters if they hold one or more keys**; in that case, to completely reset a master node, its keys must be removed first, e.g. by using [`FLUSHALL`](../flushall), and then `CLUSTER RESET`. Effects on the node: 1. All the other nodes in the cluster are forgotten. 2. All the assigned / open slots are reset, so the slots-to-nodes mapping is totally cleared. 3. If the node is a replica it is turned into an (empty) master. Its dataset is flushed, so at the end the node will be an empty master. 4. **Hard reset only**: a new Node ID is generated. 5. **Hard reset only**: `currentEpoch` and `configEpoch` vars are set to 0. 6. The new configuration is persisted on disk in the node cluster configuration file. This command is mainly useful to re-provision a Redis Cluster node in order to be used in the context of a new, different cluster. The command is also extensively used by the Redis Cluster testing framework in order to reset the state of the cluster every time a new test unit is executed. If no reset type is specified, the default is **soft**. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` if the command was successful. Otherwise an error is returned. redis FAILOVER FAILOVER ======== ``` FAILOVER ``` Syntax ``` FAILOVER [TO host port [FORCE]] [ABORT] [TIMEOUT milliseconds] ``` Available since: 6.2.0 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, This command will start a coordinated failover between the currently-connected-to master and one of its replicas. The failover is not synchronous; instead, a background task will handle coordinating the failover. 
It is designed to limit data loss and unavailability of the cluster during the failover. This command is analogous to the [`CLUSTER FAILOVER`](../cluster-failover) command for non-clustered Redis and is similar to the failover support provided by sentinel. The specific details of the default failover flow are as follows: 1. The master will internally start a `CLIENT PAUSE WRITE`, which will pause incoming writes and prevent the accumulation of new data in the replication stream. 2. The master will monitor its replicas, waiting for a replica to indicate that it has fully consumed the replication stream. If the master has multiple replicas, it will only wait for the first replica to catch up. 3. The master will then demote itself to a replica. This is done to prevent any dual master scenarios. NOTE: The master will not discard its data, so it will be able to roll back if the replica rejects the failover request in the next step. 4. The previous master will send a special PSYNC request to the target replica, `PSYNC FAILOVER`, instructing the target replica to become a master. 5. Once the previous master receives acknowledgement that the `PSYNC FAILOVER` request was accepted it will unpause its clients. If the PSYNC request is rejected, the master will abort the failover and return to normal. The field `master_failover_state` in `INFO replication` can be used to track the current state of the failover, which has the following values: * `no-failover`: There is no ongoing coordinated failover. * `waiting-for-sync`: The master is waiting for the replica to catch up to its replication offset. * `failover-in-progress`: The master has demoted itself, and is attempting to hand off ownership to a target replica. If the previous master had additional replicas attached to it, they will continue replicating from it as chained replicas. You will need to manually execute a [`REPLICAOF`](../replicaof) on these replicas to start replicating directly from the new master. 
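The flow and the `master_failover_state` values above can be sketched as a tiny state machine. This is an illustrative model of the documented transitions only; the class and method names are hypothetical and not Redis internals:

```python
# Hypothetical sketch of the master_failover_state transitions
# documented above; not actual Redis server code.

class FailoverModel:
    def __init__(self):
        self.state = "no-failover"

    def start(self):
        # FAILOVER issued: writes are paused, master waits for a replica
        self.state = "waiting-for-sync"

    def replica_caught_up(self):
        # replica has consumed the full replication stream; the master
        # demotes itself and sends the PSYNC FAILOVER request
        self.state = "failover-in-progress"

    def psync_accepted(self):
        # handoff acknowledged: clients are unpaused, failover complete
        self.state = "no-failover"

    def psync_rejected(self):
        # the master aborts the failover and rolls back to normal
        self.state = "no-failover"

m = FailoverModel()
m.start()
m.replica_caught_up()
m.psync_accepted()
```

The timeout and abort paths described below both return the model to `no-failover` the same way `psync_rejected` does.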
Optional arguments ------------------ The following optional arguments exist to modify the behavior of the failover flow: * `TIMEOUT` *milliseconds* -- This option allows specifying a maximum time a master will wait in the `waiting-for-sync` state before aborting the failover attempt and rolling back. This is intended to set an upper bound on the write outage the Redis cluster can experience. Failovers typically happen in less than a second, but could take longer if there is a large amount of write traffic or the replica is already behind in consuming the replication stream. If this value is not specified, the timeout can be considered to be "infinite". * `TO` *HOST* *PORT* -- This option allows designating a specific replica, by its host and port, to failover to. The master will wait specifically for this replica to catch up to its replication offset, and then failover to it. * `FORCE` -- If both the `TIMEOUT` and `TO` options are set, the force flag can also be used to designate that once the timeout has elapsed, the master should failover to the target replica instead of rolling back. This can be used for a best-effort attempt at a failover without data loss, but limiting write outage. NOTE: The master will always roll back if the `PSYNC FAILOVER` request is rejected by the target replica. Failover abort -------------- The failover command is intended to be safe from data loss and corruption, but can encounter some scenarios it cannot automatically remediate and may get stuck. For this purpose, the `FAILOVER ABORT` command exists, which will abort an ongoing failover and return the master to its normal state. The command has no side effects if issued in the `waiting-for-sync` state but can introduce multi-master scenarios in the `failover-in-progress` state. If a multi-master scenario is encountered, you will need to manually identify which master has the latest data, designate it as the master, and have the other replicas replicate from it. 
NOTE: [`REPLICAOF`](../replicaof) is disabled while a failover is in progress; this is to prevent unintended interactions with the failover that might cause data loss. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` if the command was accepted and a coordinated failover is in progress. An error if the operation cannot be executed. redis JSON.MGET JSON.MGET ========= ``` JSON.MGET ``` Syntax ``` JSON.MGET key [key ...] path ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [JSON 1.0.0](https://redis.io/docs/stack/json) Time complexity: O(M*N) when path is evaluated to a single value where M is the number of keys and N is the size of the value, O(N1+N2+...+Nm) when path is evaluated to multiple values where m is the number of keys and Ni is the size of the i-th key Return the values at `path` from multiple `key` arguments [Examples](#examples) Required arguments ------------------ `key` is the key to parse. Returns `null` for nonexistent keys. Optional arguments ------------------ `path` is the JSONPath to specify. Default is root `$`. Returns `null` for nonexistent paths. Return ------ JSON.MGET returns an array of bulk string replies specified as the JSON serialization of the value at each key's path. For more information about replies, see [Redis serialization protocol specification](https://redis.io/docs/reference/protocol-spec). Examples -------- **Return the values at `path` from multiple `key` arguments** Create two JSON documents. ``` redis> JSON.SET doc1 $ '{"a":1, "b": 2, "nested": {"a": 3}, "c": null}' OK redis> JSON.SET doc2 $ '{"a":4, "b": 5, "nested": {"a": 6}, "c": null}' OK ``` Get values from all arguments in the documents. 
``` redis> JSON.MGET doc1 doc2 $..a 1) "[1,3]" 2) "[4,6]" ``` See also -------- [`JSON.SET`](../json.set) | [`JSON.GET`](../json.get) Related topics -------------- * [RedisJSON](https://redis.io/docs/stack/json) * [Index and search JSON documents](https://redis.io/docs/stack/search/indexing_json) redis PFADD PFADD ===== ``` PFADD ``` Syntax ``` PFADD key [element [element ...]] ``` Available since: 2.8.9 Time complexity: O(1) to add every element. ACL categories: `@write`, `@hyperloglog`, `@fast`, Adds all the element arguments to the HyperLogLog data structure stored at the variable name specified as first argument. As a side effect of this command the HyperLogLog internals may be updated to reflect a different estimation of the number of unique items added so far (the cardinality of the set). If the approximated cardinality estimated by the HyperLogLog changed after executing the command, `PFADD` returns 1, otherwise 0 is returned. The command automatically creates an empty HyperLogLog structure (that is, a Redis String of a specified length and with a given encoding) if the specified key does not exist. Calling the command without elements, with just the variable name, is valid: this results in no operation being performed if the variable already exists, or just the creation of the data structure if the key does not exist (in the latter case 1 is returned). For an introduction to the HyperLogLog data structure, check the [`PFCOUNT`](../pfcount) command page. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), specifically: * 1 if at least 1 HyperLogLog internal register was altered. 0 otherwise. 
Examples -------- ``` PFADD hll a b c d e f g PFCOUNT hll ``` redis CONFIG CONFIG ====== ``` CONFIG REWRITE ``` Syntax ``` CONFIG REWRITE ``` Available since: 2.8.0 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, The `CONFIG REWRITE` command rewrites the `redis.conf` file the server was started with, applying the minimal changes needed to make it reflect the configuration currently used by the server, which may be different compared to the original one because of the use of the [`CONFIG SET`](../config-set) command. The rewrite is performed in a very conservative way: * Comments and the overall structure of the original redis.conf are preserved as much as possible. * If an option already exists in the old redis.conf file, it will be rewritten at the same position (line number). * If an option was not already present, but it is set to its default value, it is not added by the rewrite process. * If an option was not already present, but it is set to a non-default value, it is appended at the end of the file. * Unused lines are blanked. For instance if you used to have multiple `save` directives, but the current configuration has fewer or none as you disabled RDB persistence, all the lines will be blanked. CONFIG REWRITE is also able to rewrite the configuration file from scratch if the original one no longer exists for some reason. However, if the server was started without a configuration file at all, CONFIG REWRITE will just return an error. Atomic rewrite process ---------------------- In order to make sure the redis.conf file is always consistent, that is, on errors or crashes you always end up with either the old file or the new one, the rewrite is performed with a single `write(2)` call that has enough content to be at least as big as the old file. Sometimes additional padding in the form of comments is added in order to make sure the resulting file is big enough, and later the file gets truncated to remove the padding at the end. 
Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` when the configuration was rewritten properly. Otherwise an error is returned. redis ZREMRANGEBYSCORE ZREMRANGEBYSCORE ================ ``` ZREMRANGEBYSCORE ``` Syntax ``` ZREMRANGEBYSCORE key min max ``` Available since: 1.2.0 Time complexity: O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements removed by the operation. ACL categories: `@write`, `@sortedset`, `@slow`, Removes all elements in the sorted set stored at `key` with a score between `min` and `max` (inclusive). Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of elements removed. Examples -------- ``` ZADD myzset 1 "one" ZADD myzset 2 "two" ZADD myzset 3 "three" ZREMRANGEBYSCORE myzset -inf (2 ZRANGE myzset 0 -1 WITHSCORES ``` redis RESTORE RESTORE ======= ``` RESTORE ``` Syntax ``` RESTORE key ttl serialized-value [REPLACE] [ABSTTL] [IDLETIME seconds] [FREQ frequency] ``` Available since: 2.6.0 Time complexity: O(1) to create the new key and additional O(N\*M) to reconstruct the serialized value, where N is the number of Redis objects composing the value and M their average size. For small string values the time complexity is thus O(1)+O(1\*M) where M is small, so simply O(1). However for sorted set values the complexity is O(N\*M\*log(N)) because inserting values into sorted sets is O(log(N)). ACL categories: `@keyspace`, `@write`, `@slow`, `@dangerous`, Create a key associated with a value that is obtained by deserializing the provided serialized value (obtained via [`DUMP`](../dump)). If `ttl` is 0 the key is created without any expire, otherwise the specified expire time (in milliseconds) is set. If the `ABSTTL` modifier was used, `ttl` should represent an absolute [Unix timestamp](http://en.wikipedia.org/wiki/Unix_time) (in milliseconds) in which the key will expire. 
For eviction purposes, you may use the `IDLETIME` or `FREQ` modifiers. See [`OBJECT`](../object) for more information. `RESTORE` will return a "Target key name is busy" error when `key` already exists unless you use the `REPLACE` modifier. `RESTORE` checks the RDB version and data checksum. If they don't match an error is returned. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): The command returns OK on success. Examples -------- ``` redis> DEL mykey 0 redis> RESTORE mykey 0 "\n\x17\x17\x00\x00\x00\x12\x00\x00\x00\x03\x00\ x00\xc0\x01\x00\x04\xc0\x02\x00\x04\xc0\x03\x00\ xff\x04\x00u#<\xc0;.\xe9\xdd" OK redis> TYPE mykey list redis> LRANGE mykey 0 -1 1) "1" 2) "2" 3) "3" ``` History ------- * Starting with Redis version 3.0.0: Added the `REPLACE` modifier. * Starting with Redis version 5.0.0: Added the `ABSTTL` modifier. * Starting with Redis version 5.0.0: Added the `IDLETIME` and `FREQ` options. redis GRAPH.RO_QUERY GRAPH.RO\_QUERY =============== ``` GRAPH.RO_QUERY ``` Syntax ``` GRAPH.RO_QUERY graph query [TIMEOUT timeout] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Graph 2.2.8](https://redis.io/docs/stack/graph) Time complexity: Executes a given read only query against a specified graph. Arguments: `Graph name, Query, Timeout [optional]` Returns: [Result set](https://redis.io/docs/stack/graph/design/result_structure) for a read only query or an error if a write query was given. ``` GRAPH.RO_QUERY us_government "MATCH (p:president)-[:born]->(:state {name:'Hawaii'}) RETURN p" ``` Query-level timeouts can be set as described in [the configuration section](https://redis.io/docs/stack/graph/configuration#timeout). 
redis EXPIREAT EXPIREAT ======== ``` EXPIREAT ``` Syntax ``` EXPIREAT key unix-time-seconds [NX | XX | GT | LT] ``` Available since: 1.2.0 Time complexity: O(1) ACL categories: `@keyspace`, `@write`, `@fast`, `EXPIREAT` has the same effect and semantics as [`EXPIRE`](../expire), but instead of specifying the number of seconds representing the TTL (time to live), it takes an absolute [Unix timestamp](http://en.wikipedia.org/wiki/Unix_time) (seconds since January 1, 1970). A timestamp in the past will delete the key immediately. For the specific semantics of the command, please refer to the documentation of [`EXPIRE`](../expire). Background ---------- `EXPIREAT` was introduced in order to convert relative timeouts to absolute timeouts for the AOF persistence mode. Of course, it can be used directly to specify that a given key should expire at a given time in the future. Options ------- The `EXPIREAT` command supports a set of options: * `NX` -- Set expiry only when the key has no expiry * `XX` -- Set expiry only when the key has an existing expiry * `GT` -- Set expiry only when the new expiry is greater than current one * `LT` -- Set expiry only when the new expiry is less than current one A non-volatile key is treated as an infinite TTL for the purpose of `GT` and `LT`. The `GT`, `LT` and `NX` options are mutually exclusive. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), specifically: * `1` if the timeout was set. * `0` if the timeout was not set. e.g. key doesn't exist, or operation skipped due to the provided arguments. Examples -------- ``` SET mykey "Hello" EXISTS mykey EXPIREAT mykey 1293840000 EXISTS mykey ``` History ------- * Starting with Redis version 7.0.0: Added options: `NX`, `XX`, `GT` and `LT`.
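The option rules above can be condensed into a small decision function. This is a hedged sketch of the documented `NX`/`XX`/`GT`/`LT` semantics; the function name and `None`-for-non-volatile convention are illustrative assumptions, not Redis API:

```python
# Illustrative model of the EXPIREAT option gating rules; not Redis code.
# `current` is the key's existing expiry as a Unix timestamp, or None for
# a non-volatile key. Returns True if the new expiry should be applied.

INF = float("inf")

def should_set_expiry(current, new, flag=None):
    if flag is None:
        return True            # no option: always set
    if flag == "NX":
        return current is None  # only when the key has no expiry
    if flag == "XX":
        return current is not None  # only when an expiry already exists
    # A non-volatile key is treated as an infinite TTL for GT and LT.
    effective = INF if current is None else current
    if flag == "GT":
        return new > effective
    if flag == "LT":
        return new < effective
    raise ValueError(f"unknown flag: {flag}")
```

Note how the infinite-TTL rule means `GT` can never replace a missing expiry, while `LT` always can.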
redis ZRANDMEMBER ZRANDMEMBER =========== ``` ZRANDMEMBER ``` Syntax ``` ZRANDMEMBER key [count [WITHSCORES]] ``` Available since: 6.2.0 Time complexity: O(N) where N is the number of members returned ACL categories: `@read`, `@sortedset`, `@slow`, When called with just the `key` argument, return a random element from the sorted set value stored at `key`. If the provided `count` argument is positive, return an array of **distinct elements**. The array's length is either `count` or the sorted set's cardinality ([`ZCARD`](../zcard)), whichever is lower. If called with a negative `count`, the behavior changes and the command is allowed to return the **same element multiple times**. In this case, the number of returned elements is the absolute value of the specified `count`. The optional `WITHSCORES` modifier changes the reply so it includes the respective scores of the randomly selected elements from the sorted set. Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): without the additional `count` argument, the command returns a Bulk Reply with the randomly selected element, or `nil` when `key` does not exist. [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): when the additional `count` argument is passed, the command returns an array of elements, or an empty array when `key` does not exist. If the `WITHSCORES` modifier is used, the reply is a list of elements and their scores from the sorted set. Examples -------- ``` ZADD dadi 1 uno 2 due 3 tre 4 quattro 5 cinque 6 sei ZRANDMEMBER dadi ZRANDMEMBER dadi ZRANDMEMBER dadi -5 WITHSCORES ``` Specification of the behavior when count is passed -------------------------------------------------- When the `count` argument is a positive value this command behaves as follows: * No repeated elements are returned. * If `count` is bigger than the cardinality of the sorted set, the command will only return the whole sorted set without additional elements. 
* The order of elements in the reply is not truly random, so it is up to the client to shuffle them if needed. When the `count` is a negative value, the behavior changes as follows: * Repeating elements are possible. * Exactly `count` elements, or an empty array if the sorted set is empty (non-existing key), are always returned. * The order of elements in the reply is truly random. redis XTRIM XTRIM ===== ``` XTRIM ``` Syntax ``` XTRIM key <MAXLEN | MINID> [= | ~] threshold [LIMIT count] ``` Available since: 5.0.0 Time complexity: O(N), with N being the number of evicted entries. Constant times are very small however, since entries are organized in macro nodes containing multiple entries that can be released with a single deallocation. ACL categories: `@write`, `@stream`, `@slow`, `XTRIM` trims the stream by evicting older entries (entries with lower IDs) if needed. Trimming the stream can be done using one of these strategies: * `MAXLEN`: Evicts entries as long as the stream's length exceeds the specified `threshold`, where `threshold` is a positive integer. * `MINID`: Evicts entries with IDs lower than `threshold`, where `threshold` is a stream ID. For example, this will trim the stream to exactly the latest 1000 items: ``` XTRIM mystream MAXLEN 1000 ``` Whereas in this example, all entries that have an ID lower than 649085820-0 will be evicted: ``` XTRIM mystream MINID 649085820 ``` By default, or when provided with the optional `=` argument, the command performs exact trimming. Depending on the strategy, exact trimming means: * `MAXLEN`: the trimmed stream's length will be exactly the minimum between its original length and the specified `threshold`. * `MINID`: the oldest ID in the stream will be exactly the maximum between its original oldest ID and the specified `threshold`. 
Nearly exact trimming --------------------- Because exact trimming may require additional effort from the Redis server, the optional `~` argument can be provided to make it more efficient. For example: ``` XTRIM mystream MAXLEN ~ 1000 ``` The `~` argument between the `MAXLEN` strategy and the `threshold` means that the user is requesting to trim the stream so its length is **at least** the `threshold`, but possibly slightly more. In this case, Redis will stop trimming early when performance can be gained (for example, when a whole macro node in the data structure can't be removed). This makes trimming much more efficient, and it is usually what you want, although after trimming, the stream may have a few tens of additional entries over the `threshold`. Another way to control the amount of work done by the command when using the `~` is the `LIMIT` clause. When used, it specifies the maximal `count` of entries that will be evicted. When `LIMIT` and `count` aren't specified, the default value of 100 * the number of entries in a macro node will be implicitly used as the `count`. Specifying the value 0 as `count` disables the limiting mechanism entirely. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): The number of entries deleted from the stream. Examples -------- ``` XADD mystream * field1 A field2 B field3 C field4 D XTRIM mystream MAXLEN 2 XRANGE mystream - + ``` History ------- * Starting with Redis version 6.2.0: Added the `MINID` trimming strategy and the `LIMIT` option. redis HVALS HVALS ===== ``` HVALS ``` Syntax ``` HVALS key ``` Available since: 2.0.0 Time complexity: O(N) where N is the size of the hash. ACL categories: `@read`, `@hash`, `@slow`, Returns all values in the hash stored at `key`. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list of values in the hash, or an empty list when `key` does not exist. 
Examples -------- ``` HSET myhash field1 "Hello" HSET myhash field2 "World" HVALS myhash ``` redis STRLEN STRLEN ====== ``` STRLEN ``` Syntax ``` STRLEN key ``` Available since: 2.2.0 Time complexity: O(1) ACL categories: `@read`, `@string`, `@fast`, Returns the length of the string value stored at `key`. An error is returned when `key` holds a non-string value. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the length of the string at `key`, or `0` when `key` does not exist. Examples -------- ``` SET mykey "Hello world" STRLEN mykey STRLEN nonexisting ``` redis DECRBY DECRBY ====== ``` DECRBY ``` Syntax ``` DECRBY key decrement ``` Available since: 1.0.0 Time complexity: O(1) ACL categories: `@write`, `@string`, `@fast`, Decrements the number stored at `key` by `decrement`. If the key does not exist, it is set to `0` before performing the operation. An error is returned if the key contains a value of the wrong type or contains a string that cannot be represented as an integer. This operation is limited to 64 bit signed integers. See [`INCR`](../incr) for extra information on increment/decrement operations. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the value of `key` after the decrement Examples -------- ``` SET mykey "10" DECRBY mykey 3 ``` redis XSETID XSETID ====== ``` XSETID ``` Syntax ``` XSETID key last-id [ENTRIESADDED entries-added] [MAXDELETEDID max-deleted-id] ``` Available since: 5.0.0 Time complexity: O(1) ACL categories: `@write`, `@stream`, `@fast`, The `XSETID` command is an internal command. It is used by a Redis master to replicate the last delivered ID of streams. History ------- * Starting with Redis version 7.0.0: Added the `entries_added` and `max_deleted_entry_id` arguments. 
redis PERSIST PERSIST ======= ``` PERSIST ``` Syntax ``` PERSIST key ``` Available since: 2.2.0 Time complexity: O(1) ACL categories: `@keyspace`, `@write`, `@fast`, Remove the existing timeout on `key`, turning the key from *volatile* (a key with an expire set) to *persistent* (a key that will never expire as no timeout is associated). Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), specifically: * `1` if the timeout was removed. * `0` if `key` does not exist or does not have an associated timeout. Examples -------- ``` SET mykey "Hello" EXPIRE mykey 10 TTL mykey PERSIST mykey TTL mykey ``` redis JSON.DEL JSON.DEL ======== ``` JSON.DEL ``` Syntax ``` JSON.DEL key [path] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [JSON 1.0.0](https://redis.io/docs/stack/json) Time complexity: O(N) when path is evaluated to a single value where N is the size of the deleted value, O(N) when path is evaluated to multiple values, where N is the size of the key Delete a value [Examples](#examples) Required arguments ------------------ `key` is the key to modify. Optional arguments ------------------ `path` is the JSONPath to specify. Default is root `$`. Nonexistent paths are ignored. Note Deleting an object's root is equivalent to deleting the key from Redis. Return ------ JSON.DEL returns an integer reply specified as the number of paths deleted (0 or more). For more information about replies, see [Redis serialization protocol specification](https://redis.io/docs/reference/protocol-spec). Examples -------- **Delete a value** Create a JSON document. ``` 127.0.0.1:6379> JSON.SET doc $ '{"a": 1, "nested": {"a": 2, "b": 3}}' OK ``` Delete specified values. ``` 127.0.0.1:6379> JSON.DEL doc $..a (integer) 2 ``` Get the updated document. 
``` 127.0.0.1:6379> JSON.GET doc $ "[{\"nested\":{\"b\":3}}]" ``` See also -------- [`JSON.SET`](../json.set) | [`JSON.ARRLEN`](../json.arrlen) Related topics -------------- * [RedisJSON](https://redis.io/docs/stack/json) * [Index and search JSON documents](https://redis.io/docs/stack/search/indexing_json) redis CLIENT CLIENT ====== ``` CLIENT INFO ``` Syntax ``` CLIENT INFO ``` Available since: 6.2.0 Time complexity: O(1) ACL categories: `@slow`, `@connection`, The command returns information and statistics about the current client connection in a mostly human readable format. The reply format is identical to that of [`CLIENT LIST`](../client-list), and the content consists only of information about the current client. Examples -------- ``` CLIENT INFO ``` Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): a unique string, as described at the [`CLIENT LIST`](../client-list) page, for the current client. redis CLIENT CLIENT ====== ``` CLIENT UNPAUSE ``` Syntax ``` CLIENT UNPAUSE ``` Available since: 6.2.0 Time complexity: O(N) Where N is the number of paused clients ACL categories: `@admin`, `@slow`, `@dangerous`, `@connection`, `CLIENT UNPAUSE` is used to resume command processing for all clients that were paused by [`CLIENT PAUSE`](../client-pause). Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): The command returns `OK`. redis JSON.STRAPPEND JSON.STRAPPEND ============== ``` JSON.STRAPPEND ``` Syntax ``` JSON.STRAPPEND key [path] value ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [JSON 1.0.0](https://redis.io/docs/stack/json) Time complexity: O(1) when path is evaluated to a single value, O(N) when path is evaluated to multiple values, where N is the size of the key Append the `json-string` values to the string at `path` [Examples](#examples) Required arguments ------------------ `key` is the key to modify. 
`value` is value to append to one or more strings. About using strings with JSON commands To specify a string as an array value to append, wrap the quoted string with an additional set of single quotes. Example: `'"silver"'`. For more detailed use, see [Examples](#examples). Optional arguments ------------------ `path` is JSONPath to specify. Default is root `$`. Return value ------------ JSON.STRAPPEND returns an array of integer replies for each path, the string's new length, or `nil`, if the matching JSON value is not a string. For more information about replies, see [Redis serialization protocol specification](https://redis.io/docs/reference/protocol-spec). Examples -------- ``` 127.0.0.1:6379> JSON.SET doc $ '{"a":"foo", "nested": {"a": "hello"}, "nested2": {"a": 31}}' OK 127.0.0.1:6379> JSON.STRAPPEND doc $..a '"baz"' 1) (integer) 6 2) (integer) 8 3) (nil) 127.0.0.1:6379> JSON.GET doc $ "[{\"a\":\"foobaz\",\"nested\":{\"a\":\"hellobaz\"},\"nested2\":{\"a\":31}}]" ``` See also -------- [`JSON.ARRAPPEND`](../json.arrappend) | [`JSON.ARRINSERT`](../json.arrinsert) Related topics -------------- * [RedisJSON](https://redis.io/docs/stack/json) * [Index and search JSON documents](https://redis.io/docs/stack/search/indexing_json) redis CLIENT CLIENT ====== ``` CLIENT KILL ``` Syntax ``` CLIENT KILL <ip:port | <[ID client-id] | [TYPE <NORMAL | MASTER | SLAVE | REPLICA | PUBSUB>] | [USER username] | [ADDR ip:port] | [LADDR ip:port] | [SKIPME <YES | NO>] [[ID client-id] | [TYPE <NORMAL | MASTER | SLAVE | REPLICA | PUBSUB>] | [USER username] | [ADDR ip:port] | [LADDR ip:port] | [SKIPME <YES | NO>] ...]>> ``` Available since: 2.4.0 Time complexity: O(N) where N is the number of client connections ACL categories: `@admin`, `@slow`, `@dangerous`, `@connection`, The `CLIENT KILL` command closes a given client connection. 
This command supports two formats. The old format: ``` CLIENT KILL addr:port ``` The `ip:port` should match a line returned by the [`CLIENT LIST`](../client-list) command (`addr` field). The new format: ``` CLIENT KILL <filter> <value> ... ... <filter> <value> ``` With the new form it is possible to kill clients by different attributes instead of killing just by address. The following filters are available: * `CLIENT KILL ADDR ip:port`. This is exactly the same as the old three-arguments behavior. * `CLIENT KILL LADDR ip:port`. Kill all clients connected to the specified local (bind) address. * `CLIENT KILL ID client-id`. Allows killing a client by its unique `ID` field. Client `ID`s are retrieved using the [`CLIENT LIST`](../client-list) command. * `CLIENT KILL TYPE type`, where *type* is one of `normal`, `master`, `replica` and `pubsub`. This closes the connections of **all the clients** in the specified class. Note that clients blocked in the [`MONITOR`](../monitor) command are considered to belong to the `normal` class. * `CLIENT KILL USER username`. Closes all the connections that are authenticated with the specified [ACL](https://redis.io/topics/acl) username, however it returns an error if the username does not map to an existing ACL user. * `CLIENT KILL SKIPME yes/no`. By default this option is set to `yes`, that is, the client calling the command will not get killed, however setting this option to `no` will have the effect of also killing the client calling the command. It is possible to provide multiple filters at the same time. The command will handle multiple filters via logical AND. For example: ``` CLIENT KILL addr 127.0.0.1:12345 type pubsub ``` is valid and will kill only a pubsub client with the specified address. This format containing multiple filters is rarely useful currently. When the new form is used the command no longer returns `OK` or an error, but instead the number of killed clients, which may be zero. 
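The filter semantics described above (every supplied filter must match, with `SKIPME` defaulting to `yes` so the caller is never killed) can be sketched as a small in-memory model. This is an illustrative Python model, not Redis itself; the client dictionaries and the `kill_clients` helper are assumptions of this sketch.

```python
# Illustrative model of CLIENT KILL's new-form filter matching.
# Each client is a dict of attributes; all supplied filters are ANDed.
def kill_clients(clients, caller_id, addr=None, client_id=None,
                 ctype=None, user=None, skipme="yes"):
    killed = 0
    for c in clients:
        if addr is not None and c["addr"] != addr:
            continue
        if client_id is not None and c["id"] != client_id:
            continue
        if ctype is not None and c["type"] != ctype:
            continue
        if user is not None and c["user"] != user:
            continue
        # SKIPME defaults to yes: the calling client is never killed.
        if skipme == "yes" and c["id"] == caller_id:
            continue
        c["killed"] = True
        killed += 1
    return killed  # the new form replies with the number of killed clients
```

For example, with two pubsub clients and `SKIPME yes`, a caller that itself matches the filters is excluded from the count.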
CLIENT KILL and Redis Sentinel ------------------------------ Recent versions of Redis Sentinel (Redis 2.8.12 or greater) use CLIENT KILL in order to kill clients when an instance is reconfigured, in order to force clients to perform the handshake with one Sentinel again and update their configuration. Notes ----- Due to the single-threaded nature of Redis, it is not possible to kill a client connection while it is executing a command. From the client's point of view, the connection can never be closed in the middle of the execution of a command. However, the client will notice the connection has been closed only when the next command is sent (and results in a network error). Return ------ When called with the three arguments format: [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` if the connection exists and has been closed When called with the filter / value format: [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of clients killed. History ------- * Starting with Redis version 2.8.12: Added new filter format. * Starting with Redis version 2.8.12: `ID` option. * Starting with Redis version 3.2.0: Added the `master` type for the `TYPE` option. * Starting with Redis version 5.0.0: Replaced `slave` `TYPE` with `replica`. `slave` is still supported for backward compatibility. * Starting with Redis version 6.2.0: `LADDR` option. redis SREM SREM ==== ``` SREM ``` Syntax ``` SREM key member [member ...] ``` Available since: 1.0.0 Time complexity: O(N) where N is the number of members to be removed. ACL categories: `@write`, `@set`, `@fast`, Remove the specified members from the set stored at `key`. Specified members that are not members of this set are ignored. If `key` does not exist, it is treated as an empty set and this command returns `0`. An error is returned when the value stored at `key` is not a set. 
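The SREM semantics above (non-members ignored, missing key treated as an empty set, wrong type is an error) can be sketched with a tiny in-memory model. This is illustrative Python, not the Redis implementation; the `db` dict stands in for the keyspace.

```python
# Illustrative model of SREM: remove members from the set at `key`,
# ignoring non-members and treating a missing key as an empty set.
def srem(db, key, *members):
    value = db.get(key)
    if value is None:
        return 0  # missing key behaves like an empty set
    if not isinstance(value, set):
        # Redis replies with a WRONGTYPE error in this case
        raise TypeError("WRONGTYPE Operation against a key holding the wrong kind of value")
    removed = 0
    for m in members:
        if m in value:
            value.remove(m)
            removed += 1
    if not value:
        del db[key]  # Redis removes a set key once it becomes empty
    return removed
```

Replaying the example below: removing `"one"` returns 1, removing the non-member `"four"` returns 0.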
Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of members that were removed from the set, not including non-existing members. Examples -------- ``` SADD myset "one" SADD myset "two" SADD myset "three" SREM myset "one" SREM myset "four" SMEMBERS myset ``` History ------- * Starting with Redis version 2.4.0: Accepts multiple `member` arguments. redis TDIGEST.RESET TDIGEST.RESET ============= ``` TDIGEST.RESET ``` Syntax ``` TDIGEST.RESET key ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 2.4.0](https://redis.io/docs/stack/bloom) Time complexity: O(1) Resets a t-digest sketch: empties the sketch and re-initializes it. Required arguments ------------------ `key` is key name for an existing t-digest sketch. Return value ------------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) - `OK` if executed correctly, or [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors) otherwise. Examples -------- ``` redis> TDIGEST.RESET t OK ``` redis HGET HGET ==== ``` HGET ``` Syntax ``` HGET key field ``` Available since: 2.0.0 Time complexity: O(1) ACL categories: `@read`, `@hash`, `@fast`, Returns the value associated with `field` in the hash stored at `key`. Return ------ [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings): the value associated with `field`, or `nil` when `field` is not present in the hash or `key` does not exist. Examples -------- ``` HSET myhash field1 "foo" HGET myhash field1 HGET myhash field2 ``` redis TS.REVRANGE TS.REVRANGE =========== ``` TS.REVRANGE ``` Syntax ``` TS.REVRANGE key fromTimestamp toTimestamp [LATEST] [FILTER_BY_TS TS...] 
[FILTER_BY_VALUE min max] [COUNT count] [[ALIGN align] AGGREGATION aggregator bucketDuration [BUCKETTIMESTAMP bt] [EMPTY]] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [TimeSeries 1.4.0](https://redis.io/docs/stack/timeseries) Time complexity: O(n/m+k) where n = Number of data points, m = Chunk size (data points per chunk), k = Number of data points that are in the requested range Query a range in reverse direction [Examples](#examples) Required arguments ------------------ `key` is the key name for the time series. `fromTimestamp` is start timestamp for the range query (integer UNIX timestamp in milliseconds) or `-` to denote the timestamp of the earliest sample in the time series. `toTimestamp` is end timestamp for the range query (integer UNIX timestamp in milliseconds) or `+` to denote the timestamp of the latest sample in the time series. **Note:** When the time series is a compaction, the last compacted value may aggregate raw values with timestamp beyond `toTimestamp`. That is because `toTimestamp` limits only the timestamp of the compacted value, which is the start time of the raw bucket that was compacted. Optional arguments ------------------ `LATEST` (since RedisTimeSeries v1.8) is used when a time series is a compaction. With `LATEST`, TS.REVRANGE also reports the compacted value of the latest, possibly partial, bucket, given that this bucket's start time falls within `[fromTimestamp, toTimestamp]`. Without `LATEST`, TS.REVRANGE does not report the latest, possibly partial, bucket. When a time series is not a compaction, `LATEST` is ignored. The data in the latest bucket of a compaction is possibly partial. A bucket is *closed* and compacted only upon arrival of a new sample that *opens* a new *latest* bucket. There are cases, however, when the compacted value of the latest, possibly partial, bucket is also required. In such a case, use `LATEST`. 
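The `LATEST` behavior above can be sketched as a small model: closed (compacted) buckets are always candidates, while the latest, possibly partial bucket is reported only when `LATEST` is given. This is an illustrative Python sketch, not RedisTimeSeries; the tuple representation is an assumption.

```python
# Illustrative model of LATEST on a compacted series.
# closed_buckets: (bucket_start_ts, value) pairs already compacted;
# latest: the possibly partial, not-yet-closed bucket (or None).
def revrange_compaction(closed_buckets, latest, frm, to, use_latest=False):
    rows = [b for b in closed_buckets if frm <= b[0] <= to]
    if use_latest and latest is not None and frm <= latest[0] <= to:
        rows.append(latest)  # only reported when LATEST is specified
    return sorted(rows, reverse=True)  # reverse direction, like TS.REVRANGE
```

Without `use_latest`, only the closed buckets are returned; with it, the partial bucket appears first in the reversed output.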
`FILTER_BY_TS ts...` (since RedisTimeSeries v1.6) filters samples by a list of specific timestamps. A sample passes the filter if its exact timestamp is specified and falls within `[fromTimestamp, toTimestamp]`. `FILTER_BY_VALUE min max` (since RedisTimeSeries v1.6) filters samples by minimum and maximum values. `COUNT count` limits the number of returned samples. `ALIGN align` (since RedisTimeSeries v1.6) is a time bucket alignment control for `AGGREGATION`. It controls the time bucket timestamps by changing the reference timestamp on which a bucket is defined. Values include: * `start` or `-`: The reference timestamp will be the query start interval time (`fromTimestamp`) which can't be `-` * `end` or `+`: The reference timestamp will be the query end interval time (`toTimestamp`) which can't be `+` * A specific timestamp: align the reference timestamp to a specific time **NOTE:** When not provided, alignment is set to `0`. `AGGREGATION aggregator bucketDuration` aggregates samples into time buckets, where: * `aggregator` takes one of the following aggregation types: | `aggregator` | Description | | --- | --- | | `avg` | Arithmetic mean of all values | | `sum` | Sum of all values | | `min` | Minimum value | | `max` | Maximum value | | `range` | Difference between the maximum and the minimum value | | `count` | Number of values | | `first` | Value with lowest timestamp in the bucket | | `last` | Value with highest timestamp in the bucket | | `std.p` | Population standard deviation of the values | | `std.s` | Sample standard deviation of the values | | `var.p` | Population variance of the values | | `var.s` | Sample variance of the values | | `twa` | Time-weighted average over the bucket's timeframe (since RedisTimeSeries v1.8) | * `bucketDuration` is duration of each bucket, in milliseconds. Without `ALIGN`, bucket start times are multiples of `bucketDuration`. 
With `ALIGN align`, bucket start times are multiples of `bucketDuration` with remainder `align % bucketDuration`. The first bucket start time is less than or equal to `fromTimestamp`. `[BUCKETTIMESTAMP bt]` (since RedisTimeSeries v1.8) controls how bucket timestamps are reported. | `bt` | Timestamp reported for each bucket | | --- | --- | | `-` or `low` | the bucket's start time (default) | | `+` or `high` | the bucket's end time | | `~` or `mid` | the bucket's mid time (rounded down if not an integer) | `[EMPTY]` (since RedisTimeSeries v1.8) is a flag, which, when specified, reports aggregations also for empty buckets. | `aggregator` | Value reported for each empty bucket | | --- | --- | | `sum`, `count` | `0` | | `last` | The value of the last sample before the bucket's start. `NaN` when no such sample. | | `twa` | Average value over the bucket's timeframe based on linear interpolation of the last sample before the bucket's start and the first sample after the bucket's end. `NaN` when no such samples. | | `min`, `max`, `range`, `avg`, `first`, `std.p`, `std.s` | `NaN` | Regardless of the values of `fromTimestamp` and `toTimestamp`, no data is reported for buckets that end before the earliest sample or begin after the latest sample in the time series. Return value ------------ * [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) of ([Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings)) pairs representing (timestamp, value(double)) * [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors) (e.g., on invalid filter value) Complexity ---------- TS.REVRANGE complexity can be improved in the future by using binary search to find the start of the range, which makes this `O(Log(n/m)+k*m)`. But, because `m` is small, you can disregard it and look at the operation as `O(Log(n)+k)`. 
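The bucket-start rule described above (starts are multiples of `bucketDuration`, shifted by `align % bucketDuration` when `ALIGN` is given) can be sketched in a few lines. This is an illustrative Python helper, not part of RedisTimeSeries.

```python
# Illustrative sketch of the bucket-start rule: returns the start of
# the aggregation bucket containing timestamp ts. Without ALIGN the
# start is a multiple of bucket_duration; with ALIGN it is congruent
# to align modulo bucket_duration.
def bucket_start(ts, bucket_duration, align=0):
    return ts - ((ts - align) % bucket_duration)
```

With the stock example below, a sample at timestamp 1010 falls into the bucket starting at 1000 without `ALIGN`, but into the bucket starting at 1010 with `ALIGN 10` and a 20 ms duration.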
Examples -------- **Filter results by timestamp or sample value** Consider a metric where acceptable values are between -100 and 100, and the value 9999 is used as an indication of bad measurement. ``` 127.0.0.1:6379> TS.CREATE temp:TLV LABELS type temp location TLV OK 127.0.0.1:6379> TS.MADD temp:TLV 1000 30 temp:TLV 1010 35 temp:TLV 1020 9999 temp:TLV 1030 40 1) (integer) 1000 2) (integer) 1010 3) (integer) 1020 4) (integer) 1030 ``` Now, retrieve all values except out-of-range values. ``` TS.REVRANGE temp:TLV - + FILTER_BY_VALUE -100 100 1) 1) (integer) 1030 2) 40 2) 1) (integer) 1010 2) 35 3) 1) (integer) 1000 2) 30 ``` Now, retrieve the average value, while ignoring out-of-range values. ``` TS.REVRANGE temp:TLV - + FILTER_BY_VALUE -100 100 AGGREGATION avg 1000 1) 1) (integer) 1000 2) 35 ``` **Align aggregation buckets** To demonstrate alignment, let’s create a stock and add prices at three different timestamps. ``` 127.0.0.1:6379> TS.CREATE stock:A LABELS type stock name A OK 127.0.0.1:6379> TS.MADD stock:A 1000 100 stock:A 1010 110 stock:A 1020 120 1) (integer) 1000 2) (integer) 1010 3) (integer) 1020 127.0.0.1:6379> TS.MADD stock:A 2000 200 stock:A 2010 210 stock:A 2020 220 1) (integer) 2000 2) (integer) 2010 3) (integer) 2020 127.0.0.1:6379> TS.MADD stock:A 3000 300 stock:A 3010 310 stock:A 3020 320 1) (integer) 3000 2) (integer) 3010 3) (integer) 3020 ``` Next, aggregate without using `ALIGN`, defaulting to alignment 0. ``` 127.0.0.1:6379> TS.REVRANGE stock:A - + AGGREGATION min 20 1) 1) (integer) 3020 2) 320 2) 1) (integer) 3000 2) 300 3) 1) (integer) 2020 2) 220 4) 1) (integer) 2000 2) 200 5) 1) (integer) 1020 2) 120 6) 1) (integer) 1000 2) 100 ``` And now set `ALIGN` to 10 to have a bucket start at time 10, and align all the buckets with a 20 milliseconds duration. 
``` 127.0.0.1:6379> TS.REVRANGE stock:A - + ALIGN 10 AGGREGATION min 20 1) 1) (integer) 3010 2) 310 2) 1) (integer) 2990 2) 300 3) 1) (integer) 2010 2) 210 4) 1) (integer) 1990 2) 200 5) 1) (integer) 1010 2) 110 6) 1) (integer) 990 2) 100 ``` When the start timestamp for the range query is explicitly stated (not `-`), you can set ALIGN to that time by setting align to `-` or to `start`. ``` 127.0.0.1:6379> TS.REVRANGE stock:A 5 + ALIGN - AGGREGATION min 20 1) 1) (integer) 3005 2) 310 2) 1) (integer) 2985 2) 300 3) 1) (integer) 2005 2) 210 4) 1) (integer) 1985 2) 200 5) 1) (integer) 1005 2) 110 6) 1) (integer) 985 2) 100 ``` Similarly, when the end timestamp for the range query is explicitly stated, you can set ALIGN to that time by setting align to `+` or to `end`. See also -------- [`TS.RANGE`](../ts.range) | [`TS.MRANGE`](../ts.mrange) | [`TS.MREVRANGE`](../ts.mrevrange) Related topics -------------- [RedisTimeSeries](https://redis.io/docs/stack/timeseries)
redis XINFO XINFO ===== ``` XINFO STREAM ``` Syntax ``` XINFO STREAM key [FULL [COUNT count]] ``` Available since: 5.0.0 Time complexity: O(1) ACL categories: `@read`, `@stream`, `@slow`, This command returns information about the stream stored at `<key>`. The informative details provided by this command are: * **length**: the number of entries in the stream (see [`XLEN`](../xlen)) * **radix-tree-keys**: the number of keys in the underlying radix data structure * **radix-tree-nodes**: the number of nodes in the underlying radix data structure * **groups**: the number of consumer groups defined for the stream * **last-generated-id**: the ID of the last entry that was added to the stream * **max-deleted-entry-id**: the maximal entry ID that was deleted from the stream * **entries-added**: the count of all entries added to the stream during its lifetime * **first-entry**: the ID and field-value tuples of the first entry in the stream * **last-entry**: the ID and field-value tuples of the last entry in the stream The optional `FULL` modifier provides a more verbose reply. When provided, the `FULL` reply includes an **entries** array that consists of the stream entries (ID and field-value tuples) in ascending order. Furthermore, **groups** is also an array, and for each of the consumer groups it consists of the information reported by [`XINFO GROUPS`](../xinfo-groups) and [`XINFO CONSUMERS`](../xinfo-consumers). The `COUNT` option can be used to limit the number of stream and PEL entries that are returned (The first `<count>` entries are returned). The default `COUNT` is 10 and a `COUNT` of 0 means that all entries will be returned (execution time may be long if the stream has a lot of entries). 
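The default reply is a flat array of alternating field names and values, as the examples below show. A client can fold such a reply into a mapping with a short helper; this is an illustrative Python sketch of client-side parsing, not part of Redis.

```python
# Fold a flat XINFO STREAM reply (alternating field names and values,
# as delivered to the client) into a dict for convenient access.
def parse_flat_reply(reply):
    it = iter(reply)
    return dict(zip(it, it))  # pairs consecutive elements: name, value
```

For example, applying it to a truncated version of the default reply shown below yields `info["length"] == 2`.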
Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): a list of informational bits Examples -------- Default reply: ``` > XINFO STREAM mystream 1) "length" 2) (integer) 2 3) "radix-tree-keys" 4) (integer) 1 5) "radix-tree-nodes" 6) (integer) 2 7) "last-generated-id" 8) "1638125141232-0" 9) "max-deleted-entry-id" 10) "0-0" 11) "entries-added" 12) (integer) 2 13) "groups" 14) (integer) 1 15) "first-entry" 16) 1) "1638125133432-0" 2) 1) "message" 2) "apple" 17) "last-entry" 18) 1) "1638125141232-0" 2) 1) "message" 2) "banana" ``` Full reply: ``` > XADD mystream * foo bar "1638125133432-0" > XADD mystream * foo bar2 "1638125141232-0" > XGROUP CREATE mystream mygroup 0-0 OK > XREADGROUP GROUP mygroup Alice COUNT 1 STREAMS mystream > 1) 1) "mystream" 2) 1) 1) "1638125133432-0" 2) 1) "foo" 2) "bar" > XINFO STREAM mystream FULL 1) "length" 2) (integer) 2 3) "radix-tree-keys" 4) (integer) 1 5) "radix-tree-nodes" 6) (integer) 2 7) "last-generated-id" 8) "1638125141232-0" 9) "max-deleted-entry-id" 10) "0-0" 11) "entries-added" 12) (integer) 2 13) "entries" 14) 1) 1) "1638125133432-0" 2) 1) "foo" 2) "bar" 2) 1) "1638125141232-0" 2) 1) "foo" 2) "bar2" 15) "groups" 16) 1) 1) "name" 2) "mygroup" 3) "last-delivered-id" 4) "1638125133432-0" 5) "entries-read" 6) (integer) 1 7) "lag" 8) (integer) 1 9) "pel-count" 10) (integer) 1 11) "pending" 12) 1) 1) "1638125133432-0" 2) "Alice" 3) (integer) 1638125153423 4) (integer) 1 13) "consumers" 14) 1) 1) "name" 2) "Alice" 3) "seen-time" 4) (integer) 1638125133422 5) "active-time" 6) (integer) 1638125133432 7) "pel-count" 8) (integer) 1 9) "pending" 10) 1) 1) "1638125133432-0" 2) (integer) 1638125133432 3) (integer) 1 ``` History ------- * Starting with Redis version 6.0.0: Added the `FULL` modifier. 
* Starting with Redis version 7.0.0: Added the `max-deleted-entry-id`, `entries-added`, `recorded-first-entry-id`, `entries-read` and `lag` fields * Starting with Redis version 7.2.0: Added the `active-time` field, and changed the meaning of `seen-time`. redis XACK XACK ==== ``` XACK ``` Syntax ``` XACK key group id [id ...] ``` Available since: 5.0.0 Time complexity: O(1) for each message ID processed. ACL categories: `@write`, `@stream`, `@fast`, The `XACK` command removes one or multiple messages from the *Pending Entries List* (PEL) of a stream consumer group. A message is pending, and as such stored inside the PEL, when it was delivered to some consumer, normally as a side effect of calling [`XREADGROUP`](../xreadgroup), or when a consumer took ownership of a message calling [`XCLAIM`](../xclaim). The pending message was delivered to some consumer but the server is not yet sure it was processed at least once. So new calls to [`XREADGROUP`](../xreadgroup) to grab the message history for a consumer (for instance using an ID of 0) will return such messages. Similarly, the pending message will be listed by the [`XPENDING`](../xpending) command, which inspects the PEL. Once a consumer *successfully* processes a message, it should call `XACK` so that such a message does not get processed again, and as a side effect, the PEL entry about this message is also purged, releasing memory from the Redis server. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), specifically: The command returns the number of messages successfully acknowledged. Certain message IDs may no longer be part of the PEL (for example because they have already been acknowledged), and XACK will not count them as successfully acknowledged. Examples -------- ``` redis> XACK mystream mygroup 1526569495631-0 (integer) 1 ``` redis LPUSHX LPUSHX ====== ``` LPUSHX ``` Syntax ``` LPUSHX key element [element ...] 
``` Available since: 2.2.0 Time complexity: O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments. ACL categories: `@write`, `@list`, `@fast`, Inserts specified values at the head of the list stored at `key`, only if `key` already exists and holds a list. In contrast to [`LPUSH`](../lpush), no operation will be performed when `key` does not yet exist. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the length of the list after the push operation. Examples -------- ``` LPUSH mylist "World" LPUSHX mylist "Hello" LPUSHX myotherlist "Hello" LRANGE mylist 0 -1 LRANGE myotherlist 0 -1 ``` History ------- * Starting with Redis version 4.0.0: Accepts multiple `element` arguments. redis CLIENT CLIENT ====== ``` CLIENT TRACKINGINFO ``` Syntax ``` CLIENT TRACKINGINFO ``` Available since: 6.2.0 Time complexity: O(1) ACL categories: `@slow`, `@connection`, The command returns information about the current client connection's use of the [server assisted client side caching](https://redis.io/topics/client-side-caching) feature. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): a list of tracking information sections and their respective values, specifically: * **flags**: A list of tracking flags used by the connection. The flags and their meanings are as follows: + `off`: The connection isn't using server assisted client side caching. + `on`: Server assisted client side caching is enabled for the connection. + `bcast`: The client uses broadcasting mode. + `optin`: The client does not cache keys by default. + `optout`: The client caches keys by default. + `caching-yes`: The next command will cache keys (exists only together with `optin`). + `caching-no`: The next command won't cache keys (exists only together with `optout`). + `noloop`: The client isn't notified about keys modified by itself. 
+ `broken_redirect`: The client ID used for redirection isn't valid anymore. * **redirect**: The client ID used for notifications redirection, or -1 when none. * **prefixes**: A list of key prefixes for which notifications are sent to the client. redis GRAPH.DELETE GRAPH.DELETE ============ ``` GRAPH.DELETE ``` Syntax ``` GRAPH.DELETE graph ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Graph 1.0.0](https://redis.io/docs/stack/graph) Time complexity: Completely removes the graph and all of its entities. Arguments: `Graph name` Returns: `String indicating if operation succeeded or failed.` ``` GRAPH.DELETE us_government ``` Note: To delete a node from the graph (not the entire graph), execute a `MATCH` query and pass the alias to the `DELETE` clause: ``` GRAPH.QUERY DEMO_GRAPH "MATCH (x:Y {propname: propvalue}) DELETE x" ``` WARNING: When you delete a node, all of the node's incoming/outgoing relationships are also removed. redis OBJECT OBJECT ====== ``` OBJECT IDLETIME ``` Syntax ``` OBJECT IDLETIME key ``` Available since: 2.2.3 Time complexity: O(1) ACL categories: `@keyspace`, `@read`, `@slow`, This command returns the time in seconds since the last access to the value stored at `<key>`. The command is only available when the `maxmemory-policy` configuration directive is not set to one of the LFU policies. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) The idle time in seconds. 
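The idle-time bookkeeping described for OBJECT IDLETIME (seconds elapsed since the value was last accessed, reset on every access) can be sketched with a small tracker. This is an illustrative Python model of the concept, not how Redis stores the LRU clock internally.

```python
import time

# Illustrative model of idle-time tracking: record the last access
# per key and report whole seconds elapsed since that access.
class IdleTracker:
    def __init__(self):
        self.last_access = {}

    def touch(self, key, now=None):
        # Any read or write of the value resets the idle clock.
        self.last_access[key] = time.time() if now is None else now

    def idletime(self, key, now=None):
        now = time.time() if now is None else now
        return int(now - self.last_access[key])
```

The optional `now` parameter exists only so the sketch is deterministic to test.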
redis CF.EXISTS CF.EXISTS ========= ``` CF.EXISTS ``` Syntax ``` CF.EXISTS key item ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 1.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(k), where k is the number of sub-filters Check if an `item` exists in a Cuckoo Filter `key` ### Parameters * **key**: The name of the filter * **item**: The item to check for Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) - where "1" value means the item may exist in the filter, and a "0" value means it does not exist in the filter. Examples -------- ``` redis> CF.EXISTS cf item1 (integer) 1 ``` ``` redis> CF.EXISTS cf item_new (integer) 0 ``` redis FT.CREATE FT.CREATE ========= ``` FT.CREATE ``` Syntax ``` FT.CREATE index [ON HASH | JSON] [PREFIX count prefix [prefix ...]] [FILTER {filter}] [LANGUAGE default_lang] [LANGUAGE_FIELD lang_attribute] [SCORE default_score] [SCORE_FIELD score_attribute] [PAYLOAD_FIELD payload_attribute] [MAXTEXTFIELDS] [TEMPORARY seconds] [NOOFFSETS] [NOHL] [NOFIELDS] [NOFREQS] [STOPWORDS count [stopword ...]] [SKIPINITIALSCAN] SCHEMA field_name [AS alias] TEXT | TAG | NUMERIC | GEO | VECTOR [ SORTABLE [UNF]] [NOINDEX] [ field_name [AS alias] TEXT | TAG | NUMERIC | GEO | VECTOR [ SORTABLE [UNF]] [NOINDEX] ...] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.0.0](https://redis.io/docs/stack/search) Time complexity: O(K) at creation where K is the number of fields, O(N) if scanning the keyspace is triggered, where N is the number of keys in the keyspace Description ----------- Create an index with the given specification. For usage, see [Examples](#examples). Required arguments ------------------ `index` is index name to create. If it exists, the old specification is overwritten. 
`SCHEMA {identifier} AS {attribute} {attribute type} {options...}`, after the SCHEMA keyword, declares which fields to index: * `{identifier}` for hashes, is a field name within the hash. For JSON, the identifier is a JSON Path expression. * `AS {attribute}` defines the attribute associated to the identifier. For example, you can use this feature to alias a complex JSONPath expression with a more memorable (and easier to type) name. Field types are: * `TEXT` - Allows full-text search queries against the value in this attribute. * `TAG` - Allows exact-match queries, such as categories or primary keys, against the value in this attribute. For more information, see [Tag Fields](https://redis.io/docs/stack/search/reference/tags). * `NUMERIC` - Allows numeric range queries against the value in this attribute. See [query syntax docs](https://redis.io/docs/stack/search/reference/query_syntax) for details on how to use numeric ranges. * `GEO` - Allows geographic range queries against the value in this attribute. The value of the attribute must be a string containing a longitude (first) and latitude separated by a comma. * `VECTOR` - Allows vector similarity queries against the value in this attribute. For more information, see [Vector Fields](https://redis.io/docs/stack/search/reference/vectors). Field options are: * `SORTABLE` - `NUMERIC`, `TAG`, `TEXT`, or `GEO` attributes can have an optional **SORTABLE** argument. As the user [sorts the results by the value of this attribute](https://redis.io/docs/stack/search/reference/sorting), the results are available with very low latency. Note that this adds memory overhead, so consider not declaring it on large text attributes. You can sort an attribute without the `SORTABLE` option, but the latency is not as good as with `SORTABLE`. * `UNF` - By default, for hashes (not with JSON) `SORTABLE` applies a normalization to the indexed value (characters set to lowercase, removal of diacritics). 
When using the unnormalized form (UNF), you can disable the normalization and keep the original form of the value. With JSON, `UNF` is implicit with `SORTABLE` (normalization is disabled). * `NOSTEM` - Text attributes can have the NOSTEM argument that disables stemming when indexing its values. This may be ideal for things like proper names. * `NOINDEX` - Attributes can have the `NOINDEX` option, which means they will not be indexed. This is useful in conjunction with `SORTABLE`, to create attributes whose update using PARTIAL will not cause full reindexing of the document. If an attribute has NOINDEX and doesn't have SORTABLE, it will just be ignored by the index. * `PHONETIC {matcher}` - Declaring a text attribute as `PHONETIC` will perform phonetic matching on it in searches by default. The obligatory {matcher} argument specifies the phonetic algorithm and language used. The following matchers are supported: + `dm:en` - Double metaphone for English + `dm:fr` - Double metaphone for French + `dm:pt` - Double metaphone for Portuguese + `dm:es` - Double metaphone for Spanish. For more information, see [Phonetic Matching](https://redis.io/docs/stack/search/reference/phonetic_matching). * `WEIGHT {weight}` for `TEXT` attributes, declares the importance of this attribute when calculating result accuracy. This is a multiplication factor, and defaults to 1 if not specified. * `SEPARATOR {sep}` for `TAG` attributes, indicates how the text contained in the attribute is to be split into individual tags. The default is `,`. The value must be a single character. * `CASESENSITIVE` for `TAG` attributes, keeps the original letter cases of the tags. If not specified, the characters are converted to lowercase. * `WITHSUFFIXTRIE` for `TEXT` and `TAG` attributes, keeps a suffix trie with all terms which match the suffix. It is used to optimize `contains` (*foo*) and `suffix` (\*foo) queries. Otherwise, a brute-force search on the trie is performed. 
If suffix trie exists for some fields, these queries will be disabled for other fields. Optional arguments ------------------ `ON {data_type}` currently supports HASH (default) and JSON. To index JSON, you must have the [RedisJSON](https://redis.io/docs/stack/json) module installed. `PREFIX {count} {prefix}` tells the index which keys it should index. You can add several prefixes to index. Because the argument is optional, the default is `*` (all keys). `FILTER {filter}` is a filter expression with the full RediSearch aggregation expression language. It is possible to use `@__key` to access the key that was just added/changed. A field can be used to set field name by passing `'FILTER @indexName=="myindexname"'`. `LANGUAGE {default_lang}` if set, indicates the default language for documents in the index. Default is English. `LANGUAGE_FIELD {lang_attribute}` is document attribute set as the document language. A stemmer is used for the supplied language during indexing. If an unsupported language is sent, the command returns an error. The supported languages are Arabic, Basque, Catalan, Danish, Dutch, English, Finnish, French, German, Greek, Hungarian, Indonesian, Irish, Italian, Lithuanian, Nepali, Norwegian, Portuguese, Romanian, Russian, Spanish, Swedish, Tamil, Turkish, and Chinese. When adding Chinese language documents, set `LANGUAGE chinese` for the indexer to properly tokenize the terms. If you use the default language, then search terms are extracted based on punctuation characters and whitespace. The Chinese language tokenizer makes use of a segmentation algorithm (via [Friso](https://github.com/lionsoul2014/friso)), which segments text and checks it against a predefined dictionary. See [Stemming](https://redis.io/docs/stack/search/reference/stemming) for more information. `SCORE {default_score}` is default score for documents in the index. Default score is 1.0. 
`SCORE_FIELD {score_attribute}` is a document attribute that you use as the document rank based on the user ranking. Ranking must be between 0.0 and 1.0. If not set, the default score is 1. `PAYLOAD_FIELD {payload_attribute}` is a document attribute that you use as a binary safe payload string to the document that can be evaluated at query time by a custom scoring function or retrieved to the client. `MAXTEXTFIELDS` forces RediSearch to encode indexes as if there were more than 32 text attributes, which allows you to add additional attributes (beyond 32) using [`FT.ALTER`](../ft.alter). For efficiency, RediSearch encodes indexes differently if they are created with fewer than 32 text attributes. `NOOFFSETS` does not store term offsets for documents. It saves memory, but does not allow exact searches or highlighting. It implies `NOHL`. `TEMPORARY {seconds}` creates a lightweight temporary index that expires after a specified period of inactivity, in seconds. The internal idle timer is reset whenever the index is searched or added to. Because such indexes are lightweight, you can create thousands of such indexes without negative performance implications and, therefore, you should consider using `SKIPINITIALSCAN` to avoid costly scanning. Warning When temporary indexes expire, they drop all the records associated with them. [`FT.DROPINDEX`](../ft.dropindex) was introduced with a default of not deleting docs and a `DD` flag that enforced deletion. However, for temporary indexes, documents are deleted along with the index. Historically, RediSearch used an FT.ADD command, which made a connection between the document and the index. Then, FT.DROP, also a historic command, deleted documents by default. In version 2.x, RediSearch indexes hashes and JSONs, and the dependency between the index and documents no longer exists. `NOHL` conserves storage space and memory by disabling highlighting support. If set, the corresponding byte offsets for term positions are not stored. 
`NOHL` is also implied by `NOOFFSETS`. `NOFIELDS` does not store attribute bits for each term. It saves memory, but it does not allow filtering by specific attributes. `NOFREQS` avoids saving the term frequencies in the index. It saves memory, but does not allow sorting based on the frequencies of a given term within the document. `STOPWORDS {count}` sets the index with a custom stopword list, to be ignored during indexing and search time. `{count}` is the number of stopwords, followed by a list of stopword arguments exactly the length of `{count}`. If not set, FT.CREATE takes the default list of stopwords. If `{count}` is set to 0, the index does not have stopwords. `SKIPINITIALSCAN` if set, does not perform the initial scan and indexing of existing documents. **Notes:** * **Attribute number limits:** RediSearch supports up to 1024 attributes per schema, out of which at most 128 can be TEXT attributes. On 32-bit builds, at most 64 attributes can be TEXT attributes. The more attributes you have, the larger your index, as each additional 8 attributes require one extra byte per index record to encode. If you do not need filtering by text attributes, you can always use the `NOFIELDS` option and not encode attribute information into the index, saving space. This will still allow filtering by numeric and geo attributes. * **Running in clustered databases:** When you have several indexes in a clustered database, you need to make sure that the documents you want to index reside on the same shard as the index. You can achieve this by having your documents tagged by the index name. ``` 127.0.0.1:6379> HSET doc:1{idx} ... 127.0.0.1:6379> FT.CREATE idx ... PREFIX 1 doc: ... ``` When running RediSearch in a clustered database, you can span the index across shards using [RSCoordinator](https://github.com/RedisLabsModules/RSCoordinator). In that case, the above does not apply. Return ------ FT.CREATE returns a simple string reply `OK` if executed correctly, or an error reply otherwise. 
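The shard co-location note above relies on Redis Cluster hash tags: when a key contains a non-empty `{...}` section, only that section is hashed to choose the slot, so `doc:1{idx}` and a key named `idx` always land on the same shard. A minimal sketch of the standard slot computation (CRC16-XMODEM modulo 16384); `key_slot` and `crc16` are illustrative helper names, not part of any Redis client API:

```python
# Toy model of the Redis Cluster key -> hash-slot mapping.

def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM): polynomial 0x1021, init 0, no reflection."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """If the key contains a non-empty {tag}, only the tag is hashed."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # an empty tag like "{}" is ignored
            key = key[start + 1 : end]
    return crc16(key.encode()) % 16384
```

Because `doc:1{idx}` hashes only the tag `idx`, every document tagged with the index name maps to the same slot as the index itself.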
Examples -------- **Create an index** Create an index that stores the title, publication date, and categories of blog post hashes whose keys start with `blog:post:` (for example, `blog:post:1`). ``` 127.0.0.1:6379> FT.CREATE idx ON HASH PREFIX 1 blog:post: SCHEMA title TEXT SORTABLE published_at NUMERIC SORTABLE category TAG SORTABLE OK ``` Index the `sku` attribute from a hash as both a `TAG` and as `TEXT`: ``` 127.0.0.1:6379> FT.CREATE idx ON HASH PREFIX 1 blog:post: SCHEMA sku AS sku_text TEXT sku AS sku_tag TAG SORTABLE ``` Index two different hashes, one containing author data and one containing books, in the same index: ``` 127.0.0.1:6379> FT.CREATE author-books-idx ON HASH PREFIX 2 author:details: book:details: SCHEMA author_id TAG SORTABLE author_ids TAG title TEXT name TEXT ``` In this example, keys for author data use the key pattern `author:details:<id>` while keys for book data use the pattern `book:details:<id>`. **Index hashes that match a filter expression** Index authors whose names start with G. ``` 127.0.0.1:6379> FT.CREATE g-authors-idx ON HASH PREFIX 1 author:details FILTER 'startswith(@name, "G")' SCHEMA name TEXT ``` Index only books that have a subtitle. ``` 127.0.0.1:6379> FT.CREATE subtitled-books-idx ON HASH PREFIX 1 book:details FILTER '@subtitle != ""' SCHEMA title TEXT ``` Index books that have a "categories" attribute where each category is separated by a `;` character. ``` 127.0.0.1:6379> FT.CREATE books-idx ON HASH PREFIX 1 book:details SCHEMA title TEXT categories TAG SEPARATOR ";" ``` **Index a JSON document using a JSON Path expression** 
``` 127.0.0.1:6379> FT.CREATE idx ON JSON SCHEMA $.title AS title TEXT $.categories AS categories TAG ``` See also -------- [`FT.ALTER`](../ft.alter) | [`FT.DROPINDEX`](../ft.dropindex) Related topics -------------- * [RediSearch](https://redis.io/docs/stack/search) * [RedisJSON](https://redis.io/docs/stack/json) * [Friso](https://github.com/lionsoul2014/friso) * [Stemming](https://redis.io/docs/stack/search/reference/stemming) * [Phonetic Matching](https://redis.io/docs/stack/search/reference/phonetic_matching) * [RSCoordinator](https://github.com/RedisLabsModules/RSCoordinator) History ------- * Starting with Redis version 2.0.0: Added the `PAYLOAD_FIELD` argument for backward compatibility with the deprecated `WITHPAYLOADS` argument of `FT.SEARCH` * Starting with Redis version 2.0.0: Deprecated the `PAYLOAD_FIELD` argument
redis SADD SADD ==== ``` SADD ``` Syntax ``` SADD key member [member ...] ``` Available since: 1.0.0 Time complexity: O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments. ACL categories: `@write`, `@set`, `@fast`, Add the specified members to the set stored at `key`. Specified members that are already a member of this set are ignored. If `key` does not exist, a new set is created before adding the specified members. An error is returned when the value stored at `key` is not a set. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the number of elements that were added to the set, not including all the elements already present in the set. Examples -------- ``` SADD myset "Hello" SADD myset "World" SADD myset "World" SMEMBERS myset ``` History ------- * Starting with Redis version 2.4.0: Accepts multiple `member` arguments. redis CLIENT CLIENT ====== ``` CLIENT SETNAME ``` Syntax ``` CLIENT SETNAME connection-name ``` Available since: 2.6.9 Time complexity: O(1) ACL categories: `@slow`, `@connection`, The `CLIENT SETNAME` command assigns a name to the current connection. The assigned name is displayed in the output of [`CLIENT LIST`](../client-list) so that it is possible to identify the client that performed a given connection. For instance when Redis is used in order to implement a queue, producers and consumers of messages may want to set the name of the connection according to their role. There is no limit to the length of the name other than the usual limits of the Redis string type (512 MB). However, it is not possible to use spaces in the connection name, as this would violate the format of the [`CLIENT LIST`](../client-list) reply. It is possible to entirely remove the connection name by setting it to the empty string. The empty string is not a valid connection name, since it serves this specific purpose. 
The connection name can be inspected using [`CLIENT GETNAME`](../client-getname). Every new connection starts without an assigned name. Tip: setting names to connections is a good way to debug connection leaks due to bugs in the application using Redis. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` if the connection name was successfully set. redis GEOADD GEOADD ====== ``` GEOADD ``` Syntax ``` GEOADD key [NX | XX] [CH] longitude latitude member [longitude latitude member ...] ``` Available since: 3.2.0 Time complexity: O(log(N)) for each item added, where N is the number of elements in the sorted set. ACL categories: `@write`, `@geo`, `@slow`, Adds the specified geospatial items (longitude, latitude, name) to the specified key. Data is stored into the key as a sorted set, in a way that makes it possible to query the items with the [`GEOSEARCH`](../geosearch) command. The command takes arguments in the standard format x,y so the longitude must be specified before the latitude. There are limits to the coordinates that can be indexed: areas very near to the poles are not indexable. The exact limits, as specified by EPSG:900913 / EPSG:3785 / OSGEO:41001 are the following: * Valid longitudes are from -180 to 180 degrees. * Valid latitudes are from -85.05112878 to 85.05112878 degrees. The command will report an error when the user attempts to index coordinates outside the specified ranges. **Note:** there is no **GEODEL** command because you can use [`ZREM`](../zrem) to remove elements. The Geo index structure is just a sorted set. GEOADD options -------------- `GEOADD` also provides the following options: * **XX**: Only update elements that already exist. Never add elements. * **NX**: Don't update already existing elements. Always add new elements. * **CH**: Modify the return value from the number of new elements added, to the total number of elements changed (CH is an abbreviation of *changed*). 
Changed elements are **new elements added** and elements already existing for which **the coordinates were updated**. Elements specified in the command line that have the same score as they had in the past are not counted. Note: normally, the return value of `GEOADD` only counts the number of new elements added. Note: The **XX** and **NX** options are mutually exclusive. How does it work? ----------------- The way the sorted set is populated is using a technique called [Geohash](https://en.wikipedia.org/wiki/Geohash). Latitude and Longitude bits are interleaved to form a unique 52-bit integer. We know that a sorted set double score can represent a 52-bit integer without losing precision. This format allows for bounding box and radius querying by checking the 1+8 areas needed to cover the whole shape and discarding elements outside it. The areas are checked by calculating the range of the box covered, removing enough bits from the less significant part of the sorted set score, and computing the score range to query in the sorted set for each area. What Earth model does it use? ----------------------------- The model assumes that the Earth is a sphere since it uses the Haversine formula to calculate distance. This formula is only an approximation when applied to the Earth, which is not a perfect sphere. The introduced errors are not an issue when used, for example, by social networks and similar applications requiring this type of querying. However, in the worst case, the error may be up to 0.5%, so you may want to consider other systems for error-critical applications. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers), specifically: * When used without optional arguments, the number of elements added to the sorted set (excluding score updates). * If the `CH` option is specified, the number of elements that were changed (added or updated). 
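The bit-interleaving described under "How does it work?" can be sketched in a few lines of Python. This is a toy model of the idea (quantize each coordinate to 26 bits, interleave into a 52-bit score); the server's exact bit order and rounding may differ, and `interleave52` is a hypothetical name:

```python
# Toy sketch of the 52-bit geohash score behind GEOADD: quantize each
# coordinate to 26 bits, then interleave the bits of the two integers.

def interleave52(longitude: float, latitude: float) -> int:
    def quantize(value: float, lo: float, hi: float, bits: int = 26) -> int:
        # Map [lo, hi) onto a `bits`-bit integer.
        return int((value - lo) / (hi - lo) * (1 << bits))

    x = quantize(longitude, -180.0, 180.0)              # 26 longitude bits
    y = quantize(latitude, -85.05112878, 85.05112878)   # 26 latitude bits

    score = 0
    for i in range(26):
        score |= ((y >> i) & 1) << (2 * i)        # latitude on even bits
        score |= ((x >> i) & 1) << (2 * i + 1)    # longitude on odd bits
    return score
```

The result fits in a sorted set's double score without precision loss, since doubles represent integers up to 2**53 exactly; nearby points tend to share high-order bits, which is what makes range queries over the score useful.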
Examples -------- ``` GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania" GEODIST Sicily Palermo Catania GEORADIUS Sicily 15 37 100 km GEORADIUS Sicily 15 37 200 km ``` History ------- * Starting with Redis version 6.2.0: Added the `CH`, `NX` and `XX` options. redis LTRIM LTRIM ===== ``` LTRIM ``` Syntax ``` LTRIM key start stop ``` Available since: 1.0.0 Time complexity: O(N) where N is the number of elements to be removed by the operation. ACL categories: `@write`, `@list`, `@slow`, Trim an existing list so that it will contain only the specified range of elements. Both `start` and `stop` are zero-based indexes, where `0` is the first element of the list (the head), `1` the next element and so on. For example: `LTRIM foobar 0 2` will modify the list stored at `foobar` so that only the first three elements of the list will remain. `start` and `stop` can also be negative numbers indicating offsets from the end of the list, where `-1` is the last element of the list, `-2` the penultimate element and so on. Out of range indexes will not produce an error: if `start` is larger than the end of the list, or `start > stop`, the result will be an empty list (which causes `key` to be removed). If `stop` is larger than the end of the list, Redis will treat it like the last element of the list. A common use of `LTRIM` is together with [`LPUSH`](../lpush) / [`RPUSH`](../rpush). For example: ``` LPUSH mylist someelement LTRIM mylist 0 99 ``` This pair of commands will push a new element on the list, while making sure that the list will not grow larger than 100 elements. This is very useful when using Redis to store logs for example. It is important to note that when used in this way `LTRIM` is an O(1) operation because in the average case just one element is removed from the tail of the list. 
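The range semantics described above (zero-based inclusive indexes, negative offsets from the tail, out-of-range indexes clamped rather than raising errors) can be modeled on a plain Python list. A sketch of the semantics, not the server implementation:

```python
# Toy model of LTRIM's range handling over a Python list.

def ltrim(lst, start, stop):
    n = len(lst)
    if start < 0:
        start = max(n + start, 0)   # offset from the tail, clamped at 0
    if stop < 0:
        stop = n + stop
    stop = min(stop, n - 1)         # clamp stop to the last element
    if start > stop:
        return []                   # empty result; Redis would delete the key
    return lst[start : stop + 1]    # inclusive range
```

The capped-log pattern then becomes: after every push, `log = ltrim(log, 0, 99)` keeps at most the 100 most recent entries.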
Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings) Examples -------- ``` RPUSH mylist "one" RPUSH mylist "two" RPUSH mylist "three" LTRIM mylist 1 -1 LRANGE mylist 0 -1 ``` redis FT.ALIASUPDATE FT.ALIASUPDATE ============== ``` FT.ALIASUPDATE ``` Syntax ``` FT.ALIASUPDATE alias index ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.0.0](https://redis.io/docs/stack/search) Time complexity: O(1) Add an alias to an index. If the alias is already associated with another index, FT.ALIASUPDATE removes the alias association with the previous index. [Examples](#examples) Required arguments ------------------ `alias index` is the alias to be added to an index. Return ------ FT.ALIASUPDATE returns a simple string reply `OK` if executed correctly, or an error reply otherwise. Examples -------- **Update an index alias** Update the alias of an index. ``` 127.0.0.1:6379> FT.ALIASUPDATE alias idx OK ``` See also -------- [`FT.ALIASADD`](../ft.aliasadd) | [`FT.ALIASDEL`](../ft.aliasdel) Related topics -------------- [RediSearch](https://redis.io/docs/stack/search) redis TS.ADD TS.ADD ====== ``` TS.ADD ``` Syntax ``` TS.ADD key timestamp value [RETENTION retentionPeriod] [ENCODING [COMPRESSED|UNCOMPRESSED]] [CHUNK_SIZE size] [ON_DUPLICATE policy] [LABELS {label value}...] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [TimeSeries 1.0.0](https://redis.io/docs/stack/timeseries) Time complexity: O(M) where M is the number of compaction rules, or O(1) with no compaction Append a sample to a time series [Examples](#examples) Required arguments ------------------ `key` is the key name for the time series. `timestamp` is (integer) UNIX sample timestamp in milliseconds or `*` to set the timestamp according to the server clock. `value` is (double) numeric data value of the sample. The double number should follow [RFC 7159](https://tools.ietf.org/html/rfc7159) (JSON standard). 
In particular, the parser rejects overly large values that do not fit in binary64. It does not accept NaN or infinite values. **Notes:** * When the specified key does not exist, a new time series is created. If a [COMPACTION_POLICY](https://redis.io/docs/stack/timeseries/configuration/#compaction_policy) configuration parameter is defined, compacted time series are created as well. * If `timestamp` is older than the retention period compared to the maximum existing timestamp, the sample is discarded and an error is returned. * When adding a sample to a time series for which compaction rules are defined: + If all the original samples for an affected aggregated time bucket are available, the compacted value is recalculated based on the reported sample and the original samples. + If only a part of the original samples for an affected aggregated time bucket is available due to trimming caused in accordance with the time series RETENTION policy, the compacted value is recalculated based on the reported sample and the available original samples. + If the original samples for an affected aggregated time bucket are not available due to trimming caused in accordance with the time series RETENTION policy, the compacted value bucket is not updated. * Explicitly adding samples to a compacted time series (using `TS.ADD`, [`TS.MADD`](../ts.madd), [`TS.INCRBY`](../ts.incrby), or [`TS.DECRBY`](../ts.decrby)) may result in inconsistencies between the raw and the compacted data. The compaction process may override such samples. Optional arguments ------------------ The following arguments are optional because they can be set by [`TS.CREATE`](../ts.create). `RETENTION retentionPeriod` is the maximum retention period, compared to the maximum existing timestamp, in milliseconds. Use it only if you are creating a new time series. It is ignored if you are adding samples to an existing time series. See `RETENTION` in [`TS.CREATE`](../ts.create). 
`ENCODING enc` specifies the series sample's encoding format. Use it only if you are creating a new time series. It is ignored if you are adding samples to an existing time series. See `ENCODING` in [`TS.CREATE`](../ts.create). `CHUNK_SIZE size` is the memory size, in bytes, allocated for each data chunk. Use it only if you are creating a new time series. It is ignored if you are adding samples to an existing time series. See `CHUNK_SIZE` in [`TS.CREATE`](../ts.create). `ON_DUPLICATE policy` overrides the key and database configuration for [DUPLICATE_POLICY](https://redis.io/docs/stack/timeseries/configuration/#duplicate_policy), the policy for handling samples with identical timestamps. It is used with one of the following values: * `BLOCK`: ignore any newly reported value and reply with an error * `FIRST`: ignore any newly reported value * `LAST`: override with the newly reported value * `MIN`: only override if the value is lower than the existing value * `MAX`: only override if the value is higher than the existing value * `SUM`: If a previous sample exists, add the new sample to it so that the updated value is equal to (previous + new). If no previous sample exists, set the updated value equal to the new value. `LABELS {label value}...` is a set of label-value pairs that represent metadata labels of the time series. Use it only if you are creating a new time series. It is ignored if you are adding samples to an existing time series. See `LABELS` in [`TS.CREATE`](../ts.create). **Notes:** * You can use this command to add data to a nonexistent time series in a single command. This is why `RETENTION`, `ENCODING`, `CHUNK_SIZE`, `ON_DUPLICATE`, and `LABELS` are optional arguments. * Setting `RETENTION` and `LABELS` introduces additional time complexity. Return value ------------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) - the timestamp of the upserted sample, or [Error reply](https://redis.io/docs/reference/protocol-spec#resp-errors). 
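The `ON_DUPLICATE` rules listed above can be summarized as a small resolution function for two samples that share a timestamp. A sketch of the policy table, not the module's code; `resolve_duplicate` is a hypothetical helper name:

```python
# Toy model of ON_DUPLICATE: given the existing value at a timestamp and a
# newly reported value, return the value that should be stored.

def resolve_duplicate(policy: str, existing: float, new: float) -> float:
    if policy == "BLOCK":
        raise ValueError("duplicate sample for timestamp")  # replied as an error
    if policy == "FIRST":
        return existing               # keep the previously stored value
    if policy == "LAST":
        return new                    # override with the newly reported value
    if policy == "MIN":
        return min(existing, new)     # only override if the value is lower
    if policy == "MAX":
        return max(existing, new)     # only override if the value is higher
    if policy == "SUM":
        return existing + new         # updated value = previous + new
    raise ValueError(f"unknown policy: {policy}")
```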
Complexity ---------- If a compaction rule exists on a time series, the performance of `TS.ADD` can be reduced. The complexity of `TS.ADD` is always `O(M)`, where `M` is the number of compaction rules, or `O(1)` with no compaction. Examples -------- **Append a sample to a temperature time series** Create a temperature time series, set its retention to 1 year, and append a sample. ``` 127.0.0.1:6379> TS.ADD temperature:3:11 1548149183000 27 RETENTION 31536000000 (integer) 1548149183000 ``` **Note:** If a time series with such a name already exists, the sample is added, but the retention does not change. Add a sample to the time series, setting the sample's timestamp according to the server clock. ``` 127.0.0.1:6379> TS.ADD temperature:3:11 * 30 (integer) 1662042954573 ``` See also -------- [`TS.CREATE`](../ts.create) Related topics -------------- [RedisTimeSeries](https://redis.io/docs/stack/timeseries) redis ZREVRANGEBYLEX ZREVRANGEBYLEX ============== ``` ZREVRANGEBYLEX (deprecated) ``` As of Redis version 6.2.0, this command is regarded as deprecated. It can be replaced by [`ZRANGE`](../zrange) with the `REV` and `BYLEX` arguments when migrating or writing new code. Syntax ``` ZREVRANGEBYLEX key max min [LIMIT offset count] ``` Available since: 2.8.9 Time complexity: O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with LIMIT), you can consider it O(log(N)). ACL categories: `@read`, `@sortedset`, `@slow`, When all the elements in a sorted set are inserted with the same score, in order to force lexicographical ordering, this command returns all the elements in the sorted set at `key` with a value between `max` and `min`. Apart from the reversed ordering, `ZREVRANGEBYLEX` is similar to [`ZRANGEBYLEX`](../zrangebylex). 
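The lexicographic interval syntax shared with `ZRANGEBYLEX` (`[x` inclusive, `(x` exclusive, `-`/`+` for the infinitely negative and positive ends) can be modeled over a plain sorted member list. A toy sketch of the range semantics, assuming all members have the same score:

```python
# Toy model of ZREVRANGEBYLEX over same-score members of a sorted set.

def zrevrangebylex(members, max_spec, min_spec):
    def above_min(m):
        if min_spec == "-":
            return True
        if min_spec == "+":
            return False
        bound = min_spec[1:]
        return m >= bound if min_spec[0] == "[" else m > bound

    def below_max(m):
        if max_spec == "+":
            return True
        if max_spec == "-":
            return False
        bound = max_spec[1:]
        return m <= bound if max_spec[0] == "[" else m < bound

    # Highest-to-lowest, per the command's reversed ordering.
    return [m for m in sorted(members, reverse=True)
            if above_min(m) and below_max(m)]
```

For members `a` through `g`, `zrevrangebylex(members, "[c", "-")` yields `['c', 'b', 'a']`, mirroring the command's reply for `ZREVRANGEBYLEX myzset [c -`.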
Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list of elements in the specified score range. Examples -------- ``` ZADD myzset 0 a 0 b 0 c 0 d 0 e 0 f 0 g ZREVRANGEBYLEX myzset [c - ZREVRANGEBYLEX myzset (c - ZREVRANGEBYLEX myzset (g [aaa ``` redis FT.CURSOR FT.CURSOR ========= ``` FT.CURSOR DEL ``` Syntax ``` FT.CURSOR DEL index cursor_id ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.1.0](https://redis.io/docs/stack/search) Time complexity: O(1) Delete a cursor [Examples](#examples) Required arguments ------------------ `index` is index name. `cursor_id` is id of the cursor. Returns ------- FT.CURSOR DEL returns a simple string reply `OK` if executed correctly, or an error reply otherwise. Examples -------- **Delete a cursor** ``` redis> FT.CURSOR DEL idx 342459320 OK ``` Check that the cursor is deleted. ``` 127.0.0.1:6379> FT.CURSOR DEL idx 342459320 (error) Cursor does not exist ``` See also -------- [`FT.CURSOR READ`](../ft.cursor-read) Related topics -------------- [RediSearch](https://redis.io/docs/stack/search) redis GRAPH.QUERY GRAPH.QUERY =========== ``` GRAPH.QUERY ``` Syntax ``` GRAPH.QUERY graph query [TIMEOUT timeout] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Graph 1.0.0](https://redis.io/docs/stack/graph) Time complexity: Executes the given query against a specified graph. Arguments: `Graph name, Query, Timeout [optional]` Returns: [Result set](https://redis.io/docs/stack/graph/design/result_structure) ### Queries and Parameterized Queries The execution plans of queries, both regular and parameterized, are cached (up to [CACHE\_SIZE](https://redis.io/docs/stack/graph/configuration/#cache_size) unique queries are cached). Therefore, it is recommended to use parametrized queries when executing many queries with the same pattern but different constants. 
Query-level timeouts can be set as described in [the configuration section](https://redis.io/docs/stack/graph/configuration#timeout). #### Query structure: `GRAPH.QUERY graph_name "query"` example: ``` GRAPH.QUERY us_government "MATCH (p:president)-[:born]->(:state {name:'Hawaii'}) RETURN p" ``` #### Parametrized query structure: `GRAPH.QUERY graph_name "CYPHER param=val [param=val ...] query"` example: ``` GRAPH.QUERY us_government "CYPHER state_name='Hawaii' MATCH (p:president)-[:born]->(:state {name:$state_name}) RETURN p" ``` ### Query language The syntax is based on [Cypher](http://www.opencypher.org/). [Most](https://redis.io/docs/stack/graph/cypher_support/) of the language is supported. RedisGraph-specific extensions are also described below. 1. [Clauses](#query-structure) 2. [Functions](#functions) ### Query structure * [MATCH](#match) * [OPTIONAL MATCH](#optional-match) * [WHERE](#where) * [RETURN](#return) * [ORDER BY](#order-by) * [SKIP](#skip) * [LIMIT](#limit) * [CREATE](#create) * [MERGE](#merge) * [DELETE](#delete) * [SET](#set) * [WITH](#with) * [UNION](#union) * [UNWIND](#unwind) * [FOREACH](#foreach) #### MATCH Match describes the relationship between queried entities, using ASCII art to represent pattern(s) to match against. Nodes are represented by parentheses `()`, and Relationships are represented by brackets `[]`. Each graph entity node/relationship can contain an alias and a label/relationship type, but both can be left empty if necessary. Entity structure: `alias:label {filters}`. Alias, label/relationship type, and filters are all optional. Example: ``` (a:Actor)-[:ACT]->(m:Movie {title:"straight outta compton"}) ``` `a` is an alias for the source node, which we'll be able to refer to at different places within our query. `Actor` is the label under which this node is marked. `ACT` is the relationship type. `m` is an alias for the destination node. `Movie` destination node is of "type" movie. 
`{title:"straight outta compton"}` requires the node's title attribute to equal "straight outta compton". In this example, we're interested in actor entities which have the relation "act" with **the** entity representing the "straight outta compton" movie. It is possible to describe broader relationships by composing a multi-hop query such as: ``` (me {name:'swilly'})-[:FRIENDS_WITH]->()-[:FRIENDS_WITH]->(foaf) ``` Here we're interested in finding out who my friends' friends are. Nodes can have more than one relationship coming in or out of them, for instance: ``` (me {name:'swilly'})-[:VISITED]->(c:Country)<-[:VISITED]-(friend)<-[:FRIENDS_WITH]-(me) ``` Here we're interested in knowing which of my friends have visited at least one country I've been to. ##### Variable length relationships Nodes that are a variable number of relationship→node hops away can be found using the following syntax: ``` -[:TYPE*minHops..maxHops]-> ``` `TYPE`, `minHops` and `maxHops` are all optional and default to type agnostic, 1 and infinity, respectively. When no bounds are given the dots may be omitted. The dots may also be omitted when setting only one bound and this implies a fixed length pattern. Example: ``` GRAPH.QUERY DEMO_GRAPH "MATCH (charlie:Actor { name: 'Charlie Sheen' })-[:PLAYED_WITH*1..3]->(colleague:Actor) RETURN colleague" ``` Returns all actors related to 'Charlie Sheen' by 1 to 3 hops. ##### Bidirectional path traversal If a relationship pattern does not specify a direction, it will match regardless of which node is the source and which is the destination: ``` -[:TYPE]- ``` Example: ``` GRAPH.QUERY DEMO_GRAPH "MATCH (person_a:Person)-[:KNOWS]-(person_b:Person) RETURN person_a, person_b" ``` Returns all pairs of people connected by a `KNOWS` relationship. Note that each pair will be returned twice, once with each node in the `person_a` field and once in the `person_b` field. 
The syntactic sugar `(person_a)<-[:KNOWS]->(person_b)` will return the same results. The bracketed edge description can be omitted if all relations should be considered: `(person_a)--(person_b)`. ##### Named paths Named path variables are created by assigning a path in a MATCH clause to a single alias with the syntax: `MATCH named_path = (path)-[to]->(capture)` The named path includes all entities in the path, regardless of whether they have been explicitly aliased. Named paths can be accessed using [designated built-in functions](#path-functions) or returned directly if using a language-specific client. Example: ``` GRAPH.QUERY DEMO_GRAPH "MATCH p=(charlie:Actor { name: 'Charlie Sheen' })-[:PLAYED_WITH*1..3]->(:Actor) RETURN nodes(p) as actors" ``` This query will produce all the paths matching the pattern contained in the named path `p`. All of these paths will share the same starting point, the actor node representing Charlie Sheen, but will otherwise vary in length and contents. Though the variable-length traversal and `(:Actor)` endpoint are not explicitly aliased, all nodes and edges traversed along the path will be included in `p`. In this case, we are only interested in the nodes of each path, which we'll collect using the built-in function `nodes()`. The returned value will contain, in order, Charlie Sheen, between 0 and 2 intermediate nodes, and the unaliased endpoint. ##### All shortest paths The `allShortestPaths` function returns all the shortest paths between a pair of entities. `allShortestPaths()` is a MATCH mode in which only the shortest paths matching all criteria are captured. Both the source and the target nodes must be bound in an earlier WITH-demarcated scope to invoke `allShortestPaths()`. A minimal length (must be 1) and maximal length (must be at least 1) for the search may be specified. Zero or more relationship types may be specified (e.g. `[:R|Q*1..3]`). No property filters may be introduced in the pattern. 
`allShortestPaths()` can have any number of hops for its minimum and maximum, including zero. This number represents how many edges can be traversed in fulfilling the pattern, with a value of 0 entailing that the source node will be included in the returned path. Filters on properties are supported, and any number of labels may be specified. Example: ``` GRAPH.QUERY DEMO_GRAPH "MATCH (charlie:Actor {name: 'Charlie Sheen'}), (kevin:Actor {name: 'Kevin Bacon'}) WITH charlie, kevin MATCH p=allShortestPaths((charlie)-[:PLAYED_WITH*]->(kevin)) RETURN nodes(p) as actors" ``` This query will produce all paths of the minimum length connecting the actor node representing Charlie Sheen to the one representing Kevin Bacon. There are several 2-hop paths between the two actors, and all of these will be returned. The computation of paths then terminates, as we are not interested in any paths of length greater than 2. ##### Single-Pair minimal-weight bounded-cost bounded-length paths (Since RedisGraph v2.10) The `algo.SPpaths` procedure returns one, *n*, or all minimal-weight, [optionally] bounded-cost, [optionally] bounded-length distinct paths between a pair of entities. Each path is a sequence of distinct nodes connected by distinct edges. `algo.SPpaths()` is a MATCH mode in which only the paths matching all criteria are captured. Both the source and the target nodes must be bound in an earlier WITH-demarcated scope to invoke `algo.SPpaths()`. Input arguments: * A map containing: + `sourceNode`: Mandatory. Must be of type node + `targetNode`: Mandatory. Must be of type node + `relTypes`: Optional. Array of zero or more relationship types. A relationship must have one of these types to be part of the path. If not specified or empty: the path may contain any relationship. + `relDirection`: Optional. string. one of `'incoming'`, `'outgoing'`, `'both'`. If not specified: `'outgoing'`. + `pathCount`: Optional. Number of minimal-weight paths to retrieve. Non-negative integer. 
If not specified: 1 - `0`: retrieve all minimal-weight paths (all reported paths have the same weight) Order: 1st: minimal cost, 2nd: minimal length. - `1`: retrieve a single minimal-weight path When multiple equal-weight paths exist: (preferences: 1st: minimal cost, 2nd: minimal length) - *n* > 1: retrieve up to *n* minimal-weight paths (reported paths may have different weights) When multiple equal-weight paths exist: (preferences: 1st: minimal cost, 2nd: minimal length) + `weightProp`: Optional. If not specified: use the default weight: 1 for each relationship. The name of the property that represents the weight of each relationship (integer / float). If such a property doesn't exist, or if its value is not a positive numeric - use the default weight: 1. Note: when all weights are equal: minimal-weight ≡ shortest-path. + `costProp`: Optional. If not specified: use the default cost: 1 for each relationship. The name of the property that represents the cost of each relationship (integer / float). If such a property doesn't exist, or if its value is not a positive numeric - use the default cost: 1. + `maxLen`: Optional. Maximal path length (number of relationships along the path). Positive integer. If not specified: no maximal length constraint. + `maxCost`: Optional. Positive numeric. If not specified: no maximal cost constraint. The maximal cumulative cost for the relationships along the path. 
For each reported path: + `path` - the path + `pathWeight` - the path's weight + `pathCost` - the path's cost To retrieve additional information: + The path's length can be retrieved with `length(path)` + An array of the nodes along the path can be retrieved with `nodes(path)` + The path's first node can be retrieved with `nodes(path)[0]` + The path's last node can be retrieved with `nodes(path)[-1]` + An array of the relationships' costs along the path can be retrieved with `[r in relationships(path) | r.cost]`, where `cost` is the name of the cost property + An array of the relationships' weights along the path can be retrieved with `[r in relationships(path) | r.weight]`, where `weight` is the name of the weight property Behavior in the presence of multi-edges: * Multi-edges are two or more edges connecting the same pair of vertices (possibly with different weights and costs). * All matching edges are considered. Paths with identical vertices and different edges are different paths. The following are 3 different paths ('n1', 'n2', and 'n3' are nodes; 'e1', 'e2', 'e3', and 'e4' are edges): (n1)-[e1]-(n2)-[e2]-(n3), (n1)-[e1]-(n2)-[e3]-(n3), (n1)-[e4]-(n2)-[e3]-(n3) Example: ``` GRAPH.QUERY DEMO_GRAPH "MATCH (s:Actor {name: 'Charlie Sheen'}), (t:Actor {name: 'Kevin Bacon'}) CALL algo.SPpaths( {sourceNode: s, targetNode: t, relTypes: ['r1', 'r2', 'r3'], relDirection: 'outgoing', pathCount: 1, weightProp: 'weight', costProp: 'cost', maxLen: 3, maxCost: 100} ) YIELD path, pathCost, pathWeight RETURN path ORDER BY pathCost" ``` ##### Single-Source minimal-weight bounded-cost bounded-length paths (Since RedisGraph v2.10) The `algo.SSpaths` procedure returns one, *n*, or all minimal-weight, [optionally] bounded-cost, [optionally] bounded-length distinct paths from a given entity. Each path is a sequence of distinct nodes connected by distinct edges. `algo.SSpaths()` is a MATCH mode in which only the paths matching all criteria are captured.
The source node must be bound in an earlier WITH-demarcated scope to invoke `algo.SSpaths()`. Input arguments: * A map containing: + `sourceNode`: Mandatory. Must be of type node + `relTypes`: Optional. Array of zero or more relationship types. A relationship must have one of these types to be part of the path. If not specified or empty: the path may contain any relationship. + `relDirection`: Optional. String, one of `'incoming'`, `'outgoing'`, `'both'`. If not specified: `'outgoing'`. + `pathCount`: Optional. Number of minimal-weight paths to retrieve. Non-negative integer. If not specified: 1 This number is global (not per source-target pair); all returned paths may share the same target. - `0`: retrieve all minimal-weight paths (all reported paths have the same weight) Order: 1st: minimal cost, 2nd: minimal length. - `1`: retrieve a single minimal-weight path When multiple equal-weight paths exist: (preferences: 1st: minimal cost, 2nd: minimal length) - *n* > 1: retrieve up to *n* minimal-weight paths (reported paths may have different weights) When multiple equal-weight paths exist: (preferences: 1st: minimal cost, 2nd: minimal length) + `weightProp`: Optional. If not specified: use the default weight: 1 for each relationship. The name of the property that represents the weight of each relationship (integer / float). If such a property doesn't exist, or if its value is not a positive number, the default weight of 1 is used. Note: when all weights are equal: minimal-weight ≡ shortest-path. + `costProp`: Optional. If not specified: use the default cost: 1 for each relationship. The name of the property that represents the cost of each relationship (integer / float). If such a property doesn't exist, or if its value is not a positive number, the default cost of 1 is used. + `maxLen`: Optional. Maximal path length (number of relationships along the path). Positive integer. If not specified: no maximal length constraint. + `maxCost`: Optional. Positive numeric.
If not specified: no maximal cost constraint. The maximal cumulative cost for the relationships along the path. Result: * Paths conforming to the input arguments. For each reported path: + `path` - the path + `pathWeight` - the path's weight + `pathCost` - the path's cost To retrieve additional information: + The path's length can be retrieved with `length(path)` + An array of the nodes along the path can be retrieved with `nodes(path)` + The path's first node can be retrieved with `nodes(path)[0]` + The path's last node can be retrieved with `nodes(path)[-1]` + An array of the relationships' costs along the path can be retrieved with `[r in relationships(path) | r.cost]`, where `cost` is the name of the cost property + An array of the relationships' weights along the path can be retrieved with `[r in relationships(path) | r.weight]`, where `weight` is the name of the weight property Behavior in the presence of multi-edges: * Multi-edges are two or more edges connecting the same pair of vertices (possibly with different weights and costs). * All matching edges are considered. Paths with identical vertices and different edges are different paths. The following are 3 different paths ('n1', 'n2', and 'n3' are nodes; 'e1', 'e2', 'e3', and 'e4' are edges): (n1)-[e1]-(n2)-[e2]-(n3), (n1)-[e1]-(n2)-[e3]-(n3), (n1)-[e4]-(n2)-[e3]-(n3) Example: ``` GRAPH.QUERY DEMO_GRAPH "MATCH (s:Actor {name: 'Charlie Sheen'}) CALL algo.SSpaths( {sourceNode: s, relTypes: ['r1', 'r2', 'r3'], relDirection: 'outgoing', pathCount: 1, weightProp: 'weight', costProp: 'cost', maxLen: 3, maxCost: 100} ) YIELD path, pathCost, pathWeight RETURN path ORDER BY pathCost" ``` #### OPTIONAL MATCH The OPTIONAL MATCH clause is a MATCH variant that produces null values for elements that do not match successfully, rather than the all-or-nothing logic for patterns in MATCH clauses.
It can be considered to fill the same role as LEFT/RIGHT JOIN does in SQL, as MATCH entities must be resolved but nodes and edges introduced in OPTIONAL MATCH will be returned as nulls if they cannot be found. OPTIONAL MATCH clauses accept the same patterns as standard MATCH clauses, and may similarly be modified by WHERE clauses. Multiple MATCH and OPTIONAL MATCH clauses can be chained together, though a mandatory MATCH cannot follow an optional one. ``` GRAPH.QUERY DEMO_GRAPH "MATCH (p:Person) OPTIONAL MATCH (p)-[w:WORKS_AT]->(c:Company) WHERE w.start_date > 2016 RETURN p, w, c" ``` All `Person` nodes are returned, as well as any `WORKS_AT` relations and `Company` nodes that can be resolved and satisfy the `start_date` constraint. For each `Person` that does not resolve the optional pattern, the person will be returned as normal and the non-matching elements will be returned as null. Cypher is lenient in its handling of null values, so actions like property accesses and function calls on null values will return null values rather than emit errors. ``` GRAPH.QUERY DEMO_GRAPH "MATCH (p:Person) OPTIONAL MATCH (p)-[w:WORKS_AT]->(c:Company) RETURN p, w.department, ID(c) as ID" ``` In this case, `w.department` and `ID` will be returned if the OPTIONAL MATCH was successful, and will be null otherwise. Clauses like SET, CREATE, MERGE, and DELETE will ignore null inputs and perform the expected updates on real inputs. One exception is that attempting to create a relation with a null endpoint will cause an error: ``` GRAPH.QUERY DEMO_GRAPH "MATCH (p:Person) OPTIONAL MATCH (p)-[w:WORKS_AT]->(c:Company) CREATE (c)-[:NEW_RELATION]->(:NEW_NODE)" ``` If `c` is null for any record, this query will emit an error. In this case, no changes to the graph are committed, even if some values for `c` were resolved. #### WHERE This clause is not mandatory, but if you want to filter results, you can specify your predicates here.
Supported operations: * `=` * `<>` * `<` * `<=` * `>` * `>=` * `CONTAINS` * `ENDS WITH` * `IN` * `STARTS WITH` Predicates can be combined using AND / OR / NOT. Be sure to wrap predicates within parentheses to control precedence. Examples: ``` WHERE (actor.name = "john doe" OR movie.rating > 8.8) AND movie.votes <= 250 ``` ``` WHERE actor.age >= director.age AND actor.age > 32 ``` It is also possible to specify equality predicates within nodes using curly braces, as such: ``` (:President {name:"Jed Bartlett"})-[:WON]->(:State) ``` Here we've required that the president node's name have the value "Jed Bartlett". There's no difference between inline predicates and predicates specified within the WHERE clause. It is also possible to filter on graph patterns. The following queries, which return all presidents and the states they won in, produce the same results: ``` MATCH (p:President), (s:State) WHERE (p)-[:WON]->(s) RETURN p, s ``` and ``` MATCH (p:President)-[:WON]->(s:State) RETURN p, s ``` Pattern predicates can also be negated and combined with the logical operators AND, OR, and NOT. The following query returns all the presidents that did not win in the states where they were governors: ``` MATCH (p:President), (s:State) WHERE NOT (p)-[:WON]->(s) AND (p)-[:governor]->(s) RETURN p, s ``` Nodes can also be filtered by label: ``` MATCH (n)-[:R]->() WHERE n:L1 OR n:L2 RETURN n ``` When possible, it is preferable to specify the label in the node pattern of the MATCH clause. #### RETURN In its simple form, RETURN defines which properties the returned result-set will contain. Its structure is a list of `alias.property` entries separated by commas. For convenience, it's possible to specify only the alias when you're interested in every attribute an entity possesses, and don't want to specify each attribute individually.
For example: ``` RETURN movie.title, actor ``` Use the DISTINCT keyword to remove duplicates within the result-set: ``` RETURN DISTINCT friend_of_friend.name ``` In the above example, suppose we have two friends, Joe and Miesha, and both know Dominick. DISTINCT will make sure Dominick only appears once in the final result set. RETURN can also be used to aggregate data, similar to GROUP BY in SQL. Once an aggregation function is added to the return list, all other non-aggregated values are treated as group keys, for example: ``` RETURN movie.title, MAX(actor.age), MIN(actor.age) ``` Here we group data by movie title, and for each movie we find the ages of its youngest and oldest actors. #### Aggregations Supported aggregation functions include: * `avg` * `collect` * `count` * `max` * `min` * `percentileCont` * `percentileDisc` * `stDev` * `sum` #### ORDER BY ORDER BY specifies that the output be sorted, and how. You can order by multiple properties by stating each variable in the ORDER BY clause. Each property may specify its sort order with `ASC`/`ASCENDING` or `DESC`/`DESCENDING`. If no order is specified, it defaults to ascending. The result will be sorted by the first variable listed. For equal values, it will go to the next property in the ORDER BY clause, and so on. ``` ORDER BY <alias.property [ASC/DESC] list> ``` Below we sort our friends by height. For equal heights, weight is used to break ties. ``` ORDER BY friend.height, friend.weight DESC ``` #### SKIP The optional SKIP clause allows a specified number of records to be omitted from the result set. ``` SKIP <number of records to skip> ``` This can be useful when processing results in batches.
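As a sketch of batch pagination (reusing the `DEMO_GRAPH` graph and `Person` label from the surrounding examples), the first batch of 100 records could be fetched with:

```
GRAPH.QUERY DEMO_GRAPH "MATCH (p:Person) RETURN p ORDER BY p.name SKIP 0 LIMIT 100"
```

Note the `ORDER BY`: without a deterministic sort, consecutive batches are not guaranteed to partition the result set cleanly, since record order is otherwise unspecified.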
A query that would examine the second 100-element batch of nodes with the label `Person`, for example, would be: ``` GRAPH.QUERY DEMO_GRAPH "MATCH (p:Person) RETURN p ORDER BY p.name SKIP 100 LIMIT 100" ``` #### LIMIT Although not mandatory, you can use the LIMIT clause to limit the number of records returned by a query: ``` LIMIT <max records to return> ``` If not specified, there's no limit to the number of records returned by a query. #### CREATE CREATE is used to introduce new nodes and relationships. The simplest example of CREATE would be a single node creation: ``` CREATE (n) ``` It's possible to create multiple entities by separating them with a comma. ``` CREATE (n),(m) ``` ``` CREATE (:Person {name: 'Kurt', age: 27}) ``` To add relations between nodes, in the following example we first find an existing source node. After it's found, we create a new relationship and destination node. ``` GRAPH.QUERY DEMO_GRAPH "MATCH (a:Person) WHERE a.name = 'Kurt' CREATE (a)-[:MEMBER]->(:Band {name:'Nirvana'})" ``` Here the source node is bound, while the destination node is unbound. As a result, a new node is created representing the band Nirvana and a new relation connects Kurt to the band. Lastly, we create a complete pattern. All entities within the pattern which are not bound will be created. ``` GRAPH.QUERY DEMO_GRAPH "CREATE (jim:Person {name:'Jim', age:29})-[:FRIENDS]->(pam:Person {name:'Pam', age:27})-[:WORKS]->(:Employer {name:'Dunder Mifflin'})" ``` This query will create three nodes and two relationships. #### DELETE DELETE is used to remove both nodes and relationships. Note that deleting a node also deletes all of its incoming and outgoing relationships.
To delete a node and all of its relationships: ``` GRAPH.QUERY DEMO_GRAPH "MATCH (p:Person {name:'Jim'}) DELETE p" ``` To delete a relationship: ``` GRAPH.QUERY DEMO_GRAPH "MATCH (:Person {name:'Jim'})-[r:FRIENDS]->() DELETE r" ``` This query will delete all outgoing `FRIENDS` relationships from the node with the name 'Jim'. #### SET SET is used to create or update properties on nodes and relationships. To set a property on a node, use [`SET`](../set). ``` GRAPH.QUERY DEMO_GRAPH "MATCH (n { name: 'Jim' }) SET n.name = 'Bob'" ``` If you want to set multiple properties in one go, simply separate them with commas in a single SET clause. ``` GRAPH.QUERY DEMO_GRAPH "MATCH (n { name: 'Jim', age:32 }) SET n.age = 33, n.name = 'Bob'" ``` The same can be accomplished by setting the graph entity variable to a map: ``` GRAPH.QUERY DEMO_GRAPH "MATCH (n { name: 'Jim', age:32 }) SET n = {age: 33, name: 'Bob'}" ``` Using `=` in this way replaces all of the entity's previous properties, while `+=` will only set the properties it explicitly mentions. In the same way, the full property set of a graph entity can be assigned or merged: ``` GRAPH.QUERY DEMO_GRAPH "MATCH (jim {name: 'Jim'}), (pam {name: 'Pam'}) SET jim = pam" ``` After executing this query, the `jim` node will have the same property set as the `pam` node. To remove a node's property, simply set the property value to NULL. ``` GRAPH.QUERY DEMO_GRAPH "MATCH (n { name: 'Jim' }) SET n.name = NULL" ``` #### MERGE The MERGE clause ensures that a path exists in the graph (either the path already exists, or it needs to be created). MERGE either matches existing nodes and binds them, or it creates new data and binds that. It's like a combination of MATCH and CREATE that also allows you to specify what happens if the data was matched or created. For example, you can specify that the graph must contain a node for a user with a certain name.
If there isn't a node with the correct name, a new node will be created and its name property set. Any aliases in the MERGE path that were introduced by earlier clauses can only be matched; MERGE will not create them. When the MERGE path doesn't rely on earlier clauses, the whole path will always either be matched or created. If all path elements are introduced by MERGE, a match failure will cause all elements to be created, even if part of the match succeeded. The MERGE path can be followed by ON MATCH SET and ON CREATE SET directives to conditionally set properties depending on whether or not the match succeeded. **Merging nodes** To merge a single node with a label: ``` GRAPH.QUERY DEMO_GRAPH "MERGE (robert:Critic)" ``` To merge a single node with properties: ``` GRAPH.QUERY DEMO_GRAPH "MERGE (charlie { name: 'Charlie Sheen', age: 10 })" ``` To merge a single node, specifying both label and property: ``` GRAPH.QUERY DEMO_GRAPH "MERGE (michael:Person { name: 'Michael Douglas' })" ``` **Merging paths** Because MERGE either matches or creates a full path, it is easy to accidentally create duplicate nodes. For example, if we run the following query on our sample graph: ``` GRAPH.QUERY DEMO_GRAPH "MERGE (charlie { name: 'Charlie Sheen' })-[r:ACTED_IN]->(wallStreet:Movie { name: 'Wall Street' })" ``` Even though a node with the name 'Charlie Sheen' already exists, the full pattern does not match, so 1 relation and 2 nodes - including a duplicate 'Charlie Sheen' node - will be created. We should use multiple MERGE clauses to merge a relation and only create non-existent endpoints: ``` GRAPH.QUERY DEMO_GRAPH "MERGE (charlie { name: 'Charlie Sheen' }) MERGE (wallStreet:Movie { name: 'Wall Street' }) MERGE (charlie)-[r:ACTED_IN]->(wallStreet)" ``` If we don't want to create anything if pattern elements don't exist, we can combine MATCH and MERGE clauses.
The following query merges a relation only if both of its endpoints already exist: ``` GRAPH.QUERY DEMO_GRAPH "MATCH (charlie { name: 'Charlie Sheen' }) MATCH (wallStreet:Movie { name: 'Wall Street' }) MERGE (charlie)-[r:ACTED_IN]->(wallStreet)" ``` **On Match and On Create directives** Using ON MATCH and ON CREATE, MERGE can set properties differently depending on whether a pattern is matched or created. In this query, we'll merge paths based on a list of properties and conditionally set a property when creating new entities: ``` GRAPH.QUERY DEMO_GRAPH "UNWIND ['Charlie Sheen', 'Michael Douglas', 'Tamara Tunie'] AS actor_name MATCH (movie:Movie { name: 'Wall Street' }) MERGE (person {name: actor_name})-[:ACTED_IN]->(movie) ON CREATE SET person.first_role = movie.name" ``` #### WITH The WITH clause allows parts of queries to be independently executed and have their results handled uniquely. This allows for more flexible query composition as well as data manipulations that would otherwise not be possible in a single query. If, for example, we wanted to find all children in our graph who are above the average age of all people: ``` GRAPH.QUERY DEMO_GRAPH "MATCH (p:Person) WITH AVG(p.age) AS average_age MATCH (:Person)-[:PARENT_OF]->(child:Person) WHERE child.age > average_age RETURN child" ``` This also allows us to use modifiers like `DISTINCT`, `SKIP`, `LIMIT`, and `ORDER` that otherwise require `RETURN` clauses. ``` GRAPH.QUERY DEMO_GRAPH "MATCH (u:User) WITH u AS nonrecent ORDER BY u.lastVisit LIMIT 3 SET nonrecent.should_contact = true" ``` #### UNWIND The UNWIND clause breaks down a given list into a sequence of records, each containing a single element of the list. The order of the records preserves the original list order.
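As a minimal illustration (using a literal list, so no stored graph data is assumed), UNWIND turns one record holding a three-element list into three records:

```
GRAPH.QUERY DEMO_GRAPH "UNWIND [1, 2, 3] AS x RETURN x"
```

Each of the three resulting records binds `x` to one list element, in the original order: 1, then 2, then 3.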
``` GRAPH.QUERY DEMO_GRAPH "CREATE (p {array:[1,2,3]})" ``` ``` GRAPH.QUERY DEMO_GRAPH "MATCH (p) UNWIND p.array AS y RETURN y" ``` #### FOREACH (Since RedisGraph v2.12) The `FOREACH` clause feeds the components of a list to a sub-query comprised of **updating clauses only** (`CREATE`, `MERGE`, [`SET`](../set), `REMOVE`, `DELETE` and `FOREACH`), while passing on the records it receives without change. The clauses within the sub-query recognize the bound variables defined prior to the `FOREACH` clause, but are local in the sense that later clauses are not aware of the variables defined inside them. In other words, `FOREACH` uses the current context, and does not affect it. The `FOREACH` clause can be used for numerous purposes, such as: updating and creating graph entities in a concise manner, marking nodes/edges that satisfy some condition or are part of a path of interest, and performing conditional queries. We show examples of queries performing the above 3 use-cases. The following query will create 4 nodes, each with property `v` set to the corresponding value in the list. ``` GRAPH.QUERY DEMO_GRAPH "FOREACH(i in [1, 2, 3, 4] | CREATE (n:N {v: i}))" ``` The following query marks the nodes of all paths of total length up to 15 km (and at most 5 road segments) from a hotel in Toronto to a steakhouse with at least 2 Michelin stars. ``` GRAPH.QUERY DEMO_GRAPH "MATCH p = (hotel:HOTEL {City: 'Toronto'})-[r:ROAD*..5]->(rest:RESTAURANT {type: 'Steakhouse'}) WHERE sum(r.length) <= 15 AND hotel.stars >= 4 AND rest.Michelin_stars >= 2 FOREACH(n in nodes(p) | SET n.part_of_path = true)" ``` The following query searches for all the hotels, checks whether they buy directly from a bakery, and if not - makes sure they are marked as buying from a supplier that supplies bread, and that they do not buy directly from a bakery.
``` GRAPH.QUERY DEMO_GRAPH "MATCH (h:HOTEL) OPTIONAL MATCH (h)-[b:BUYS_FROM]->(bakery:BAKERY) FOREACH(do_perform IN CASE WHEN b IS NULL THEN [1] ELSE [] END | MERGE (h)-[b2:BUYS_FROM]->(s:SUPPLIER {supplies_bread: true}) SET b2.direct = false)" ``` #### UNION The UNION clause is used to combine the results of multiple queries. UNION combines the results of two or more queries into a single result set that includes all the rows that belong to all queries in the union. The number and the names of the columns must be identical in all queries combined by using UNION. To keep all the result rows, use UNION ALL. Using just UNION will combine and remove duplicates from the result set. ``` GRAPH.QUERY DEMO_GRAPH "MATCH (n:Actor) RETURN n.name AS name UNION ALL MATCH (n:Movie) RETURN n.title AS name" ``` ### Functions This section contains information on all supported functions from the Cypher query language. * [Predicate functions](#predicate-functions) * [Scalar functions](#scalar-functions) * [Aggregating functions](#aggregating-functions) * [List functions](#list-functions) * [Mathematical operators](#mathematical-operators) * [Mathematical functions](#mathematical-functions) * [Trigonometric functions](#trigonometric-functions) * [String functions](#string-functions) * [Point functions](#point-functions) * [Type conversion functions](#type-conversion-functions) * [Node functions](#node-functions) * [Path functions](#path-functions) Predicate functions ------------------- | Function | Description | | --- | --- | | [all(*var* IN *list* WHERE *predicate*)](#existential-comprehension-functions) | Returns true when *predicate* holds true for all elements in *list* | | [any(*var* IN *list* WHERE *predicate*)](#existential-comprehension-functions) | Returns true when *predicate* holds true for at least one element in *list* | | exists(*pattern*) | Returns true when at least one match for *pattern* exists | | isEmpty(*list*|*map*|*string*) | Returns true if the input list
or map contains no elements or if the input string contains no characters Returns null when the input evaluates to null | | [none(*var* IN *list* WHERE *predicate*)](#existential-comprehension-functions) | Returns true when *predicate* holds false for all elements in *list* | | [single(*var* IN *list* WHERE *predicate*)](#existential-comprehension-functions) | Returns true when *predicate* holds true for exactly one element in *list* | Scalar functions ---------------- | Function | Description | | --- | --- | | coalesce(*expr*[, expr...]) | Returns the evaluation of the first argument that evaluates to a non-null value Returns null when all arguments evaluate to null | | endNode(*relationship*) | Returns the destination node of a relationship Returns null when *relationship* evaluates to null | | hasLabels(*node*, *labelsList*) \* | Returns true when *node* contains all labels in *labelsList*, otherwise false Returns true when *labelsList* evaluates to an empty list | | id(*node*|*relationship*) | Returns the internal ID of a node or relationship (which is not immutable) | | labels(*node*) | Returns a list of strings: all labels of *node* Returns null when *node* evaluates to null | | properties(*expr*) | When *expr* is a node or relationship: Returns a map containing all the properties of the given node or relationship When *expr* evaluates to a map: Returns *expr* unchanged Returns null when *expr* evaluates to null | | randomUUID() | Returns a random UUID (Universally Unique Identifier) | | startNode(*relationship*) | Returns the source node of a relationship Returns null when *relationship* evaluates to null | | timestamp() | Returns the current system timestamp (milliseconds since epoch) | | type(*relationship*) | Returns a string: the type of *relationship* Returns null when *relationship* evaluates to null | | typeOf(*expr*) \* | (Since RedisGraph v2.12) Returns a string: the type of a literal, an expression's evaluation, an alias, a node's property, or a
relationship's property Return value is one of `Map`, `String`, `Integer`, `Boolean`, `Float`, `Node`, `Edge`, `List`, `Path`, `Point`, or `Null` | \* RedisGraph-specific extensions to Cypher Aggregating functions --------------------- | Function | Description | | --- | --- | | avg(*expr*) | Returns the average of a set of numeric values. null values are ignored Returns null when *expr* has no evaluations | | collect(*expr*) | Returns a list containing all non-null elements which evaluated from a given expression | | count(*expr*|\*) | When argument is *expr*: returns the number of non-null evaluations of *expr* When argument is `*`: returns the total number of evaluations (including nulls) | | max(*expr*) | Returns the maximum value in a set of values (taking into account type ordering). null values are ignored Returns null when *expr* has no evaluations | | min(*expr*) | Returns the minimum value in a set of values (taking into account type ordering). null values are ignored Returns null when *expr* has no evaluations | | percentileCont(*expr*, *percentile*) | Returns a linear-interpolated percentile (between 0.0 and 1.0) over a set of numeric values. null values are ignored Returns null when *expr* has no evaluations | | percentileDisc(*expr*, *percentile*) | Returns a nearest-value percentile (between 0.0 and 1.0) over a set of numeric values. null values are ignored Returns null when *expr* has no evaluations | | stDev(*expr*) | Returns the sample standard deviation over a set of numeric values. null values are ignored Returns null when *expr* has no evaluations | | stDevP(*expr*) | Returns the population standard deviation over a set of numeric values. null values are ignored Returns null when *expr* has no evaluations | | sum(*expr*) | Returns the sum of a set of numeric values. 
null values are ignored Returns 0 when *expr* has no evaluations | List functions -------------- | Function | Description | | --- | --- | | head(*expr*) | Returns the first element of a list Returns null when *expr* evaluates to null or an empty list | | keys(*expr*) | Returns a list of strings: all key names for a given map or all property names for a given node or edge Returns null when *expr* evaluates to null | | last(*expr*) | Returns the last element of a list Returns null when *expr* evaluates to null or an empty list | | list.dedup(*list*) \* | (Since RedisGraph v2.12) Given a list, returns a similar list after removing duplicate elements Order is preserved, duplicates are removed from the end of the list Returns null when *list* evaluates to null Emit an error when *list* does not evaluate to a list or to null | | list.insert(*list*, *idx*, *val*[, *dups* = TRUE]) \* | (Since RedisGraph v2.12) Given a list, returns a list after inserting a given value at a given index *idx* is 0-based when non-negative, or from the end of the list when negative Returns null when *list* evaluates to null Returns *list* when *val* evaluates to null Returns *list* when *idx* evaluates to an integer not in [-NumItems-1 .. NumItems] When *dups* evaluates to FALSE: returns *list* when *val* evaluates to a value that is already an element of *list* Emit an error when *list* does not evaluate to a list or to null Emit an error when *idx* does not evaluate to an integer Emit an error when *dups*, if specified, does not evaluate to a Boolean | | list.insertListElements(*list*, *list2*, *idx*[, *dups* = TRUE]) \* | (Since RedisGraph v2.12) Given a list, returns a list after inserting the elements of a second list at a given index *idx* is 0-based when non-negative, or from the end of the list when negative Returns null when *list* evaluates to null Returns *list* when *list2* evaluates to null Returns *list* when *idx* evaluates to an integer not in [-NumItems-1 ..
NumItems] When *dups* evaluates to FALSE: If an element of *list2* evaluates to an element of *list* it would be skipped; If multiple elements of *list2* evaluate to the same value - this value would be inserted at most once to *list* Emit an error when *list* does not evaluate to a list or to null Emit an error when *list2* does not evaluate to a list or to null Emit an error when *idx* does not evaluate to an integer Emit an error when *dups*, if specified, does not evaluate to a Boolean | | list.remove(*list*, *idx*[, *count* = 1]) \* | (Since RedisGraph v2.12) Given a list, returns a list after removing a given number of consecutive elements (or fewer, if the end of the list is reached), starting at a given index *idx* is 0-based when non-negative, or from the end of the list when negative Returns null when *list* evaluates to null Returns *list* when *idx* evaluates to an integer not in [-NumItems .. NumItems-1] Returns *list* when *count* evaluates to a non-positive integer Emit an error when *list* does not evaluate to a list or to null Emit an error when *idx* does not evaluate to an integer Emit an error when *count*, if specified, does not evaluate to an integer | | list.sort(*list*[, *ascending* = TRUE]) \* | (Since RedisGraph v2.12) Given a list, returns a list with similar elements, but sorted (inversely-sorted if *ascending* evaluates to FALSE) Returns null when *list* evaluates to null Emit an error when *list* does not evaluate to a list or to null Emit an error when *ascending*, if specified, does not evaluate to a Boolean | | range(*first*, *last*[, *step* = 1]) | Returns a list of integers in the range [*first*, *last*].
*step*, an optional integer argument, is the increment between consecutive elements | | size(*expr*) | Returns the number of elements in a list Returns null when *expr* evaluates to null | | tail(*expr*) | Returns a sublist of a list, which contains all its elements except the first Returns an empty list when *expr* contains fewer than 2 elements Returns null when *expr* evaluates to null | | [reduce(...)](#reduce) | Returns a scalar produced by evaluating an expression against each list member | \* RedisGraph-specific extensions to Cypher Mathematical operators ---------------------- | Function | Description | | --- | --- | | + | Add two values | | - | Subtract second value from first | | \* | Multiply two values | | / | Divide first value by the second | | ^ | Raise the first value to the power of the second | | % | Perform modulo division of the first value by the second | Mathematical functions ---------------------- | Function | Description | | --- | --- | | abs(*expr*) | Returns the absolute value of a numeric value Returns null when *expr* evaluates to null | | ceil(*expr*) \*\* | When *expr* evaluates to an integer: returns its evaluation When *expr* evaluates to a floating point: returns a floating point equal to the smallest integer greater than or equal to *expr* Returns null when *expr* evaluates to null | | e() | Returns the constant *e*, the base of the natural logarithm | | exp(*expr*) | Returns *e*^*expr*, where *e* is the base of the natural logarithm Returns null when *expr* evaluates to null | | floor(*expr*) \*\* | When *expr* evaluates to an integer: returns its evaluation When *expr* evaluates to a floating point: returns a floating point equal to the greatest integer less than or equal to *expr* Returns null when *expr* evaluates to null | | log(*expr*) | Returns the natural logarithm of a numeric value Returns nan when *expr* evaluates to a negative numeric value, -inf when *expr* evaluates to 0, and null when *expr* evaluates to null | |
log10(*expr*) | Returns the base-10 logarithm of a numeric value Returns nan when *expr* evaluates to a negative numeric value, -inf when *expr* evaluates to 0, and null when *expr* evaluates to null | | pow(*base*, *exponent*) \* | Returns *base* raised to the power of *exponent* (equivalent to *base*^*exponent*) Returns null when either evaluates to null | | rand() | Returns a random floating point in the range [0,1] | | round(*expr*) \*\* \*\*\* | When *expr* evaluates to an integer: returns its evaluation When *expr* evaluates to a floating point: returns a floating point equal to the integer closest to *expr* Returns null when *expr* evaluates to null | | sign(*expr*) | Returns the signum of a numeric value: 0 when *expr* evaluates to 0, -1 when *expr* evaluates to a negative numeric value, and 1 when *expr* evaluates to a positive numeric value Returns null when *expr* evaluates to null | | sqrt(*expr*) | Returns the square root of a numeric value Returns nan when *expr* evaluates to a negative value and null when *expr* evaluates to null | \* RedisGraph-specific extensions to Cypher \*\* RedisGraph-specific behavior: to avoid possible loss of precision, when *expr* evaluates to an integer - the result is an integer as well \*\*\* RedisGraph-specific behavior: tie-breaking method is "half away from zero" Trigonometric functions ----------------------- | Function | Description | | --- | --- | | acos(*expr*) | Returns the arccosine, in radians, of a numeric value Returns nan when *expr* evaluates to a numeric value not in [-1, 1] and null when *expr* evaluates to null | | asin(*expr*) | Returns the arcsine, in radians, of a numeric value Returns nan when *expr* evaluates to a numeric value not in [-1, 1] and null when *expr* evaluates to null | | atan(*expr*) | Returns the arctangent, in radians, of a numeric value Returns null when *expr* evaluates to null | | atan2(*expr*, *expr*) | Returns the 2-argument arctangent, in radians, of a pair of numeric values
(Cartesian coordinates) Returns 0 when both expressions evaluate to 0 Returns null when either expression evaluates to null | | cos(*expr*) | Returns the cosine of a numeric value that represents an angle in radians Returns null when *expr* evaluates to null | | cot(*expr*) | Returns the cotangent of a numeric value that represents an angle in radians Returns inf when *expr* evaluates to 0 and null when *expr* evaluates to null | | degrees(*expr*) | Converts a numeric value from radians to degrees Returns null when *expr* evaluates to null | | haversin(*expr*) | Returns half the versine of a numeric value that represents an angle in radians Returns null when *expr* evaluates to null | | pi() | Returns the mathematical constant *pi* | | radians(*expr*) | Converts a numeric value from degrees to radians Returns null when *expr* evaluates to null | | sin(*expr*) | Returns the sine of a numeric value that represents an angle in radians Returns null when *expr* evaluates to null | | tan(*expr*) | Returns the tangent of a numeric value that represents an angle in radians Returns null when *expr* evaluates to null | String functions ---------------- | Function | Description | | --- | --- | | left(*str*, *len*) | Returns a string containing the *len* leftmost characters of *str* Returns null when *str* evaluates to null, otherwise emits an error if *len* evaluates to null | | lTrim(*str*) | Returns *str* with leading whitespace removed Returns null when *str* evaluates to null | | replace(*str*, *search*, *replace*) | Returns *str* with all occurrences of *search* replaced with *replace* Returns null when any argument evaluates to null | | reverse(*str*) | Returns a string in which the order of all characters in *str* is reversed Returns null when *str* evaluates to null | | right(*str*, *len*) | Returns a string containing the *len* rightmost characters of *str* Returns null when *str* evaluates to null, otherwise emits an error if *len* evaluates to null | | rTrim(*str*) 
| Returns *str* with trailing whitespace removed Returns null when *str* evaluates to null | | split(*str*, *delimiter*) | Returns a list of strings from splitting *str* by *delimiter* Returns null when any argument evaluates to null | | string.join(*strList*[, *delimiter* = '']) \* | (Since RedisGraph v2.12) Returns a concatenation of a list of strings using a given delimiter Returns null when *strList* evaluates to null Returns null when *delimiter*, if specified, evaluates to null Emits an error when *strList* does not evaluate to a list or to null Emits an error when an element of *strList* does not evaluate to a string Emits an error when *delimiter*, if specified, does not evaluate to a string or to null | | string.matchRegEx(*str*, *regex*) \* | (Since RedisGraph v2.12) Given a string and a regular expression, returns a list of all matches and matching regions Returns an empty list when *str* evaluates to null Returns an empty list when *regex* evaluates to null Emits an error when *str* does not evaluate to a string or to null Emits an error when *regex* does not evaluate to a valid regex string or to null | | string.replaceRegEx(*str*, *regex*, *replacement*) \* | (Since RedisGraph v2.12) Given a string and a regular expression, returns a string after replacing each regex match with a given replacement Returns null when *str* evaluates to null Returns null when *regex* evaluates to null Returns null when *replacement* evaluates to null Emits an error when *str* does not evaluate to a string or to null Emits an error when *regex* does not evaluate to a valid regex string or to null Emits an error when *replacement* does not evaluate to a string or to null | | substring(*str*, *start*[, *len*]) | When *len* is specified: returns a substring of *str* beginning with a 0-based index *start* and with length *len* When *len* is not specified: returns a substring of *str* beginning with a 0-based index *start* and extending to the end of *str* Returns null when *str* 
evaluates to null Emits an error when *start* or *len* evaluates to null | | toLower(*str*) | Returns *str* in lowercase Returns null when *str* evaluates to null | | toJSON(*expr*) \* | Returns a [JSON representation](#json-format) of a value Returns null when *expr* evaluates to null | | toUpper(*str*) | Returns *str* in uppercase Returns null when *str* evaluates to null | | trim(*str*) | Returns *str* with leading and trailing whitespace removed Returns null when *str* evaluates to null | | size(*str*) | Returns the number of characters in *str* Returns null when *str* evaluates to null | \* RedisGraph-specific extensions to Cypher Point functions --------------- | Function | Description | | --- | --- | | [point(*map*)](#point) | Returns a Point representing lat/lon coordinates | | distance(*point1*, *point2*) | Returns the distance in meters between the two given points Returns null when either evaluates to null | Type conversion functions ------------------------- | Function | Description | | --- | --- | | toBoolean(*expr*) | Returns a Boolean when *expr* evaluates to a Boolean Converts a string to Boolean (`"true"` (case insensitive) to true, `"false"` (case insensitive) to false, any other value to null) Converts an integer to Boolean (0 to `false`, any other values to `true`) Returns null when *expr* evaluates to null Emits an error on other types | | toBooleanList(*exprList*) | Converts a list to a list of Booleans. 
Each element in the list is converted using toBooleanOrNull() | | toBooleanOrNull(*expr*) | Returns a Boolean when *expr* evaluates to a Boolean Converts a string to Boolean (`"true"` (case insensitive) to true, `"false"` (case insensitive) to false, any other value to null) Converts an integer to Boolean (0 to `false`, any other values to `true`) Returns null when *expr* evaluates to null Returns null for other types | | toFloat(*expr*) | Returns a floating point when *expr* evaluates to a floating point Converts an integer to a floating point Converts a string to a floating point or null Returns null when *expr* evaluates to null Emits an error on other types | | toFloatList(*exprList*) | Converts a list to a list of floating points. Each element in the list is converted using toFloatOrNull() | | toFloatOrNull(*expr*) | Returns a floating point when *expr* evaluates to a floating point Converts an integer to a floating point Converts a string to a floating point or null Returns null when *expr* evaluates to null Returns null for other types | | toInteger(*expr*) \* | Returns an integer when *expr* evaluates to an integer Converts a floating point to integer Converts a string to an integer or null Converts a Boolean to an integer (false to 0, true to 1) (Since RedisGraph v2.10.8) Returns null when *expr* evaluates to null Emits an error on other types | | toIntegerList(*exprList*) \* | Converts a list to a list of integer values. 
Each element in the list is converted using toIntegerOrNull() | | toIntegerOrNull(*expr*) \* | Returns an integer when *expr* evaluates to an integer Converts a floating point to integer Converts a string to an integer or null Converts a Boolean to an integer (false to 0, true to 1) (Since RedisGraph v2.10.8) Returns null when *expr* evaluates to null Returns null for other types | | toString(*expr*) | Returns a string when *expr* evaluates to a string Converts an integer, float, Boolean, string, or point to a string representation Returns null when *expr* evaluates to null Emits an error on other types | | toStringList(*exprList*) | Converts a list to a list of strings. Each element in the list is converted using toStringOrNull() | | toStringOrNull(*expr*) | Returns a string when *expr* evaluates to a string Converts an integer, float, Boolean, string, or point to a string representation Returns null when *expr* evaluates to null Returns null for other types | \* RedisGraph-specific behavior: rounding method when converting a floating point to an integer is "toward negative infinity (floor)" Node functions -------------- | Function | Description | | --- | --- | | indegree(*node* [, *reltype* ...]) \* indegree(*node* [, *reltypeList*]) \* | When no relationship types are specified: Returns the number of *node*'s incoming edges When one or more relationship types are specified: Returns the number of *node*'s incoming edges with one of the given relationship types Returns null when *node* evaluates to null The *reltypeList* syntax is supported since RedisGraph v2.10.8 | | outdegree(*node* [, *reltype* ...]) \* outdegree(*node* [, *reltypeList*]) \* | When no relationship types are specified: Returns the number of *node*'s outgoing edges When one or more relationship types are specified: Returns the number of *node*'s outgoing edges with one of the given relationship types Returns null when *node* evaluates to null The *reltypeList* syntax is supported since RedisGraph 
v2.10.8 | \* RedisGraph-specific extensions to Cypher Path functions -------------- | Function | Description | | --- | --- | | nodes(*path*) | Returns a list containing all the nodes in *path* Returns null if *path* evaluates to null | | relationships(*path*) | Returns a list containing all the relationships in *path* Returns null if *path* evaluates to null | | length(*path*) | Returns the length (number of edges) of *path* Returns null if *path* evaluates to null | | [shortestPath(...)](#shortestPath) \* | Returns the shortest path that resolves the given pattern | \* RedisGraph-specific extensions to Cypher ### List comprehensions List comprehensions are a syntactical construct that accepts an array and produces another based on the provided map and filter directives. They are a common construct in functional languages and modern high-level languages. In Cypher, they use the syntax: ``` [element IN array WHERE condition | output elem] ``` * `array` can be any expression that produces an array: a literal, a property reference, or a function call. * `WHERE condition` is an optional argument to only project elements that satisfy certain criteria. If omitted, all elements in the array will be represented in the output. * `| output elem` is an optional argument that allows elements to be transformed in the output array. If omitted, the output elements will be the same as their corresponding inputs. The following query collects all paths of any length, then for each produces an array containing the `name` property of every node with a `rank` property greater than 10: ``` MATCH p=()-[\*]->() RETURN [node IN nodes(p) WHERE node.rank > 10 | node.name] ``` #### Existential comprehension functions The functions `any()`, `all()`, `single()` and `none()` use a simplified form of the list comprehension syntax and return a boolean value. ``` any(element IN array WHERE condition) ``` They can operate on any form of input array, but are particularly useful for path filtering. 
The following query collects all paths of any length in which all traversed edges have a weight less than 3: ``` MATCH p=()-[\*]->() WHERE all(edge IN relationships(p) WHERE edge.weight < 3) RETURN p ``` ### Pattern comprehensions Pattern comprehensions are a method of producing a list composed of values found by performing the traversal of a given graph pattern. The following query returns the name of a `Person` node and a list of all their friends' ages: ``` MATCH (n:Person) RETURN n.name, [(n)-[:FRIEND\_OF]->(f:Person) | f.age] ``` Optionally, a `WHERE` clause may be embedded in the pattern comprehension to filter results. In this query, all friends' ages will be gathered for friendships that started before 2010: ``` MATCH (n:Person) RETURN n.name, [(n)-[e:FRIEND\_OF]->(f:Person) WHERE e.since < 2010 | f.age] ``` ### CASE WHEN The `CASE` statement comes in two variants. Both accept an input argument and evaluate it against one or more expressions. The first `WHEN` argument that specifies a value matching the result will be accepted, and the value specified by the corresponding `THEN` keyword will be returned. Optionally, an `ELSE` argument may also be specified to indicate what to do if none of the `WHEN` arguments match successfully. In its simple form, there is only one expression to evaluate and it immediately follows the `CASE` keyword: ``` MATCH (n) RETURN CASE n.title WHEN 'Engineer' THEN 100 WHEN 'Scientist' THEN 80 ELSE n.privileges END ``` In its generic form, no expression follows the `CASE` keyword. Instead, each `WHEN` statement specifies its own expression: ``` MATCH (n) RETURN CASE WHEN n.age < 18 THEN '0-18' WHEN n.age < 30 THEN '18-30' ELSE '30+' END ``` #### Reduce The `reduce()` function accepts a starting value and updates it by evaluating an expression against each element of the list: ``` RETURN reduce(sum = 0, n IN [1,2,3] | sum + n) ``` `sum` will successively have the values 0, 1, 3, and 6, with 6 being the output of the function call. 
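The accumulator behavior of `reduce()` is a classic left fold, which can be sketched outside Cypher. The Python below (standard library only, nothing RedisGraph-specific) mirrors `reduce(sum = 0, n IN [1,2,3] | sum + n)`:

```python
from functools import reduce

# Cypher's reduce(sum = 0, n IN [1,2,3] | sum + n) is a left fold:
# the accumulator starts at 0 and is updated once per list element,
# taking the values 0, 1, 3, and finally 6.
total = reduce(lambda acc, n: acc + n, [1, 2, 3], 0)
print(total)  # 6
```

The third argument to Python's `functools.reduce` plays the same role as the `sum = 0` initializer in the Cypher form.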
### Point The `point()` function expects one map argument of the form: ``` RETURN point({latitude: lat\_value, longitude: lon\_val}) ``` The key names `latitude` and `longitude` are case-sensitive. The point constructed by this function can be saved as a node/relationship property or used within the query, such as in a `distance` function call. ### shortestPath The `shortestPath()` function is invoked with the form: ``` MATCH (a {v: 1}), (b {v: 4}) RETURN shortestPath((a)-[:L\*]->(b)) ``` The sole `shortestPath` argument is a traversal pattern. This pattern's endpoints must be resolved prior to the function call, and no property filters may be introduced in the pattern. The relationship pattern may specify any number of relationship types (including zero) to be considered. If a minimum number of edges to traverse is specified, it may only be 0 or 1, while any number may be used for the maximum. If 0 is specified as the minimum, the source node will be included in the returned path. If no shortest path can be found, NULL is returned. ### JSON format `toJSON()` returns the input value in JSON formatting. For primitive data types and arrays, this conversion is conventional. Maps and map projections (`toJSON(node {.prop})`) are converted to JSON objects, as are nodes and relationships. 
The format for a node object in JSON is: ``` { "type": "node", "id": id(int), "labels": [label(string) X N], "properties": { property\_key(string): property\_value X N } } ``` The format for a relationship object in JSON is: ``` { "type": "relationship", "id": id(int), "relationship": type(string), "properties": { property\_key(string): property\_value X N }, "start": src\_node(node), "end": dest\_node(node) } ``` Procedures ---------- Procedures are invoked using the syntax: ``` GRAPH.QUERY social "CALL db.labels()" ``` Or the variant: ``` GRAPH.QUERY social "CALL db.labels() YIELD label" ``` YIELD modifiers are only required if explicitly specified; by default the value in the 'Yields' column will be emitted automatically. | Procedure | Arguments | Yields | Description | | --- | --- | --- | --- | | db.labels | none | `label` | Yields all node labels in the graph. | | db.relationshipTypes | none | `relationshipType` | Yields all relationship types in the graph. | | db.propertyKeys | none | `propertyKey` | Yields all property keys in the graph. | | db.indexes | none | `type`, `label`, `properties`, `language`, `stopwords`, `entitytype`, `info` | Yields all indexes in the graph, denoting whether each is exact-match or full-text, which label and properties it covers, and whether it indexes node or relationship attributes. | | db.constraints | none | `type`, `label`, `properties`, `entitytype`, `status` | Yields all constraints in the graph, denoting the constraint type (UNIQUE/MANDATORY) and which label/relationship-type and properties each enforces. | | db.idx.fulltext.createNodeIndex | `label`, `property` [, `property` ...] | none | Builds a full-text searchable index on a label and the one or more specified properties. | | db.idx.fulltext.drop | `label` | none | Deletes the full-text index associated with the given label. 
| | db.idx.fulltext.queryNodes | `label`, `string` | `node`, `score` | Retrieves all nodes that contain the specified string in the full-text indexes on the given label. | | algo.pageRank | `label`, `relationship-type` | `node`, `score` | Runs the pagerank algorithm over nodes of the given label, considering only edges of the given relationship type. | | [algo.BFS](#BFS) | `source-node`, `max-level`, `relationship-type` | `nodes`, `edges` | Performs BFS to find all nodes connected to the source. A `max-level` of 0 indicates unlimited, and a non-NULL `relationship-type` defines the relationship type that may be traversed. | | dbms.procedures() | none | `name`, `mode` | Lists all procedures in the DBMS, yielding for every procedure its name and mode (read/write). | ### Algorithms #### BFS The breadth-first-search algorithm accepts 4 arguments: `source-node (node)` - The root of the search. `max-level (integer)` - If greater than zero, this argument indicates how many levels should be traversed by BFS. 1 would retrieve only the source's neighbors, 2 would retrieve all nodes within 2 hops, and so on. `relationship-type (string)` - If this argument is NULL, all relationship types will be traversed. Otherwise, it specifies a single relationship type to perform BFS over. It can yield two outputs: `nodes` - An array of all nodes connected to the source without violating the input constraints. `edges` - An array of all edges traversed during the search. This does not necessarily contain all edges connecting nodes in the tree, as cycles or multiple edges connecting the same source and destination do not have a bearing on the reachability this algorithm tests for. These can be used to construct the directed acyclic graph that represents the BFS tree. Emitting edges incurs a small performance penalty. Indexing -------- RedisGraph supports single-property indexes for node labels and for relationship types. String, numeric, and geospatial data types can be indexed. 
### Creating an index for a node label For a node label, the index creation syntax is: ``` GRAPH.QUERY DEMO\_GRAPH "CREATE INDEX FOR (p:Person) ON (p.age)" ``` An old syntax is also supported: ``` GRAPH.QUERY DEMO\_GRAPH "CREATE INDEX ON :Person(age)" ``` After an index is explicitly created, it will automatically be used by queries that reference that label and any indexed property in a filter. ``` GRAPH.EXPLAIN DEMO\_GRAPH "MATCH (p:Person) WHERE p.age > 80 RETURN p" 1) "Results" 2) " Project" 3) " Index Scan | (p:Person)" ``` This can significantly improve the runtime of queries with very specific filters. An index on `:Employer(name)`, for example, will dramatically benefit the query: ``` GRAPH.QUERY DEMO\_GRAPH "MATCH (:Employer {name: 'Dunder Mifflin'})-[:EMPLOYS]->(p:Person) RETURN p" ``` An example of utilizing a geospatial index to find `Employer` nodes within 5 kilometers of Scranton is: ``` GRAPH.QUERY DEMO\_GRAPH "WITH point({latitude:41.4045886, longitude:-75.6969532}) AS scranton MATCH (e:Employer) WHERE distance(e.location, scranton) < 5000 RETURN e" ``` Geospatial indexes can currently only be leveraged with `<` and `<=` filters; matching nodes outside of the given radius is performed using conventional matching. ### Creating an index for a relationship type For a relationship type, the index creation syntax is: ``` GRAPH.QUERY DEMO\_GRAPH "CREATE INDEX FOR ()-[f:FOLLOW]-() ON (f.created\_at)" ``` The execution plan then shows the index in use: ``` GRAPH.EXPLAIN DEMO\_GRAPH "MATCH (p:Person {id: 0})-[f:FOLLOW]->(fp) WHERE 0 < f.created\_at AND f.created\_at < 1000 RETURN fp" 1) "Results" 2) " Project" 3) " Edge By Index Scan | [f:FOLLOW]" 4) " Node By Index Scan | (p:Person)" ``` This can significantly improve the runtime of queries that traverse super nodes or that start traversal from a relationship. 
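Why an index scan beats a label scan can be sketched independently of RedisGraph. The Python below models the idea behind the `p.age > 80` filter above; the `ages` data and the threshold are illustrative, not taken from any real graph:

```python
import bisect

# Without an index, a filter must examine every entity: O(N).
ages = [12, 25, 37, 44, 58, 63, 71, 82, 85, 91]  # kept sorted, as an index would keep it
linear = [a for a in ages if a > 80]

# With an ordered index, a binary search jumps to the first qualifying
# value, then only the matching tail is read: O(log N + M).
start = bisect.bisect_right(ages, 80)
indexed = ages[start:]

print(indexed)            # [82, 85, 91]
assert linear == indexed  # same answer, far fewer comparisons at scale
```

The same asymmetry explains why very selective filters benefit most: the smaller the matching tail, the larger the fraction of work the index skips.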
### Deleting an index for a node label For a node label, the index deletion syntax is: ``` GRAPH.QUERY DEMO\_GRAPH "DROP INDEX ON :Person(age)" ``` ### Deleting an index for a relationship type For a relationship type, the index deletion syntax is: ``` GRAPH.QUERY DEMO\_GRAPH "DROP INDEX ON :FOLLOW(created\_at)" ``` Full-text indexing ------------------ RedisGraph leverages the indexing capabilities of [RediSearch](https://redis.io/docs/stack/search/index.html) to provide full-text indices through procedure calls. ### Creating a full-text index for a node label To construct a full-text index on the `title` property of all nodes with label `Movie`, use the syntax: ``` GRAPH.QUERY DEMO\_GRAPH "CALL db.idx.fulltext.createNodeIndex('Movie', 'title')" ``` More properties can be added to this index by adding their names to the above set of arguments, or using this syntax again with the additional names. ``` GRAPH.QUERY DEMO\_GRAPH "CALL db.idx.fulltext.createNodeIndex('Person', 'firstName', 'lastName')" ``` RediSearch provides two index configuration options: 1. Language - Defines which language to use for stemming text, which adds the base form of a word to the index. This allows the query for "going" to also return results for "go" and "gone", for example. 2. Stopwords - These are words that are usually so common that they do not add much information to search, but take up a lot of space and CPU time in the index. To construct a full-text index on the `title` property of all nodes with label `Movie`, using the `German` language and custom stopwords, use the syntax: ``` GRAPH.QUERY DEMO\_GRAPH "CALL db.idx.fulltext.createNodeIndex({ label: 'Movie', language: 'German', stopwords: ['a', 'ab'] }, 'title')" ``` RediSearch provides three additional field configuration options: 1. Weight - The importance of the text in the field 2. Nostem - Skip stemming when indexing text 3. 
Phonetic - Enable phonetic search on the text To construct a full-text index on the `title` property with phonetic search of all nodes with label `Movie`, use the syntax: ``` GRAPH.QUERY DEMO\_GRAPH "CALL db.idx.fulltext.createNodeIndex('Movie', {field: 'title', phonetic: 'dm:en'})" ``` ### Utilizing a full-text index for a node label An index can be invoked to match any whole words contained within: ``` GRAPH.QUERY DEMO\_GRAPH "CALL db.idx.fulltext.queryNodes('Movie', 'Book') YIELD node RETURN node.title" 1) 1) "node.title" 2) 1) 1) "The Jungle Book" 2) 1) "The Book of Life" 3) 1) "Query internal execution time: 0.927409 milliseconds" ``` This CALL clause can be interleaved with other Cypher clauses to perform more elaborate manipulations: ``` GRAPH.QUERY DEMO\_GRAPH "CALL db.idx.fulltext.queryNodes('Movie', 'Book') YIELD node AS m WHERE m.genre = 'Adventure' RETURN m ORDER BY m.rating" 1) 1) "m" 2) 1) 1) 1) 1) "id" 2) (integer) 1168 2) 1) "labels" 2) 1) "Movie" 3) 1) "properties" 2) 1) 1) "genre" 2) "Adventure" 2) 1) "rating" 2) "7.6" 3) 1) "votes" 2) (integer) 151342 4) 1) "year" 2) (integer) 2016 5) 1) "title" 2) "The Jungle Book" 3) 1) "Query internal execution time: 0.226914 milliseconds" ``` In addition to yielding matching nodes, full-text index scans will return the score of each node. This is the [TF-IDF](https://redis.io/docs/stack/search/reference/scoring/#tfidf-default) score of the node, which is informed by how many times the search terms appear in the node and how closely grouped they are. 
This can be observed in the example: ``` GRAPH.QUERY DEMO\_GRAPH "CALL db.idx.fulltext.queryNodes('Node', 'hello world') YIELD node, score RETURN score, node.val" 1) 1) "score" 2) "node.val" 2) 1) 1) "2" 2) "hello world" 2) 1) "1" 2) "hello to a different world" 3) 1) "Cached execution: 1" 2) "Query internal execution time: 0.335401 milliseconds" ``` ### Deleting a full-text index for a node label For a node label, the full-text index deletion syntax is: ``` GRAPH.QUERY DEMO_GRAPH "CALL db.idx.fulltext.drop('Movie')" ```
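The TF-IDF intuition behind those scores (more occurrences of rarer terms yield a higher score) can be sketched in a few lines. This is a textbook toy, not RediSearch's exact weighting, and it ignores the proximity ("how closely grouped") component entirely; the corpus and query terms are made up for illustration:

```python
import math

def tf_idf(doc, corpus, terms):
    """Toy TF-IDF: sum over terms of (term frequency) * (inverse document frequency)."""
    words = doc.split()
    score = 0.0
    for t in terms:
        tf = words.count(t)                            # occurrences in this document
        df = sum(1 for d in corpus if t in d.split())  # documents containing the term
        score += tf * math.log(len(corpus) / df) if df else 0.0
    return score

corpus = ["hello hello world", "hello moon", "goodbye moon"]
scores = [tf_idf(d, corpus, ["hello", "world"]) for d in corpus]
# The first document repeats "hello" and contains the rarer "world",
# so it outranks the second, which outranks the third (no matches at all).
assert scores[0] > scores[1] > scores[2]
```

Real full-text engines layer normalization, field weights, and proximity on top of this core, which is why the scores returned by `db.idx.fulltext.queryNodes` will not match this toy exactly.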
redis PSUBSCRIBE PSUBSCRIBE ========== ``` PSUBSCRIBE ``` Syntax ``` PSUBSCRIBE pattern [pattern ...] ``` Available since: 2.0.0 Time complexity: O(N) where N is the number of patterns the client is already subscribed to. ACL categories: `@pubsub`, `@slow`, Subscribes the client to the given patterns. Supported glob-style patterns: * `h?llo` subscribes to `hello`, `hallo` and `hxllo` * `h*llo` subscribes to `hllo` and `heeeello` * `h[ae]llo` subscribes to `hello` and `hallo`, but not `hillo` Use `\` to escape special characters if you want to match them verbatim. Once the client enters the subscribed state it is not supposed to issue any other commands, except for additional [`SUBSCRIBE`](../subscribe), [`SSUBSCRIBE`](../ssubscribe), `PSUBSCRIBE`, [`UNSUBSCRIBE`](../unsubscribe), [`SUNSUBSCRIBE`](../sunsubscribe), [`PUNSUBSCRIBE`](../punsubscribe), [`PING`](../ping), [`RESET`](../reset) and [`QUIT`](../quit) commands. However, if RESP3 is used (see [`HELLO`](../hello)) it is possible for a client to issue any commands while in subscribed state. For more information, see [Pub/sub](https://redis.io/docs/manual/pubsub/). Return ------ When successful, this command doesn't return anything. Instead, for each pattern, one message with the first element being the string "psubscribe" is pushed as a confirmation that the command succeeded. Behavior change history ----------------------- * `>= 6.2.0`: [`RESET`](../reset) can be called to exit subscribed state. redis PSETEX PSETEX ====== ``` PSETEX (deprecated) ``` As of Redis version 2.6.12, this command is regarded as deprecated. It can be replaced by [`SET`](../set) with the `PX` argument when migrating or writing new code. Syntax ``` PSETEX key milliseconds value ``` Available since: 2.6.0 Time complexity: O(1) ACL categories: `@write`, `@string`, `@slow`, `PSETEX` works exactly like [`SETEX`](../setex) with the sole difference that the expire time is specified in milliseconds instead of seconds. 
Examples -------- ``` PSETEX mykey 1000 "Hello" PTTL mykey GET mykey ``` redis ZRANGE ZRANGE ====== ``` ZRANGE ``` Syntax ``` ZRANGE key start stop [BYSCORE | BYLEX] [REV] [LIMIT offset count] [WITHSCORES] ``` Available since: 1.2.0 Time complexity: O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements returned. ACL categories: `@read`, `@sortedset`, `@slow`, Returns the specified range of elements in the sorted set stored at `<key>`. `ZRANGE` can perform different types of range queries: by index (rank), by the score, or by lexicographical order. Starting with Redis 6.2.0, this command can replace the following commands: [`ZREVRANGE`](../zrevrange), [`ZRANGEBYSCORE`](../zrangebyscore), [`ZREVRANGEBYSCORE`](../zrevrangebyscore), [`ZRANGEBYLEX`](../zrangebylex) and [`ZREVRANGEBYLEX`](../zrevrangebylex). Common behavior and options --------------------------- The order of elements is from the lowest to the highest score. Elements with the same score are ordered lexicographically. The optional `REV` argument reverses the ordering, so elements are ordered from highest to lowest score, and score ties are resolved by reverse lexicographical ordering. The optional `LIMIT` argument can be used to obtain a sub-range from the matching elements (similar to *SELECT LIMIT offset, count* in SQL). A negative `<count>` returns all elements from the `<offset>`. Keep in mind that if `<offset>` is large, the sorted set needs to be traversed for `<offset>` elements before getting to the elements to return, which can add up to O(N) time complexity. The optional `WITHSCORES` argument supplements the command's reply with the scores of elements returned. The returned list contains `value1,score1,...,valueN,scoreN` instead of `value1,...,valueN`. Client libraries are free to return a more appropriate data type (suggestion: an array with (value, score) arrays/tuples). Index ranges ------------ By default, the command performs an index range query. 
The `<start>` and `<stop>` arguments represent zero-based indexes, where `0` is the first element, `1` is the next element, and so on. These arguments specify an **inclusive range**, so for example, `ZRANGE myzset 0 1` will return both the first and the second element of the sorted set. The indexes can also be negative numbers indicating offsets from the end of the sorted set, with `-1` being the last element of the sorted set, `-2` the penultimate element, and so on. Out of range indexes do not produce an error. If `<start>` is greater than either the end index of the sorted set or `<stop>`, an empty list is returned. If `<stop>` is greater than the end index of the sorted set, Redis will use the last element of the sorted set. Score ranges ------------ When the `BYSCORE` option is provided, the command behaves like [`ZRANGEBYSCORE`](../zrangebyscore) and returns the range of elements from the sorted set having scores equal or between `<start>` and `<stop>`. `<start>` and `<stop>` can be `-inf` and `+inf`, denoting the negative and positive infinities, respectively. This means that you are not required to know the highest or lowest score in the sorted set to get all elements from or up to a certain score. By default, the score intervals specified by `<start>` and `<stop>` are closed (inclusive). It is possible to specify an open interval (exclusive) by prefixing the score with the character `(`. For example: ``` ZRANGE zset (1 5 BYSCORE ``` Will return all elements with `1 < score <= 5` while: ``` ZRANGE zset (5 (10 BYSCORE ``` Will return all the elements with `5 < score < 10` (5 and 10 excluded). Reverse ranges -------------- Using the `REV` option reverses the sorted set, with index 0 as the element with the highest score. By default, `<start>` must be less than or equal to `<stop>` to return anything. 
However, if the `BYSCORE` or `BYLEX` options are selected, the `<start>` is the highest score to consider, and `<stop>` is the lowest score to consider, therefore `<start>` must be greater than or equal to `<stop>` in order to return anything. For example: ``` ZRANGE zset 5 10 REV ``` Will return the elements between index 5 and 10 in the reversed index. ``` ZRANGE zset 10 5 REV BYSCORE ``` Will return all elements with scores less than 10 and greater than 5. Lexicographical ranges ---------------------- When the `BYLEX` option is used, the command behaves like [`ZRANGEBYLEX`](../zrangebylex) and returns the range of elements from the sorted set between the `<start>` and `<stop>` lexicographical closed range intervals. Note that lexicographical ordering relies on all elements having the same score. The reply is unspecified when the elements have different scores. Valid `<start>` and `<stop>` must start with `(` or `[`, in order to specify whether the range interval is exclusive or inclusive, respectively. The special values of `+` or `-` for `<start>` and `<stop>` mean positive and negative infinite strings, respectively, so for instance the command `ZRANGE myzset - + BYLEX` is guaranteed to return all the elements in the sorted set, providing that all the elements have the same score. The `REV` option reverses the order of the `<start>` and `<stop>` elements, where `<start>` must be lexicographically greater than `<stop>` to produce a non-empty result. ### Lexicographical comparison of strings Strings are compared as a binary array of bytes. Because of how the ASCII character set is specified, this means that usually this also has the effect of comparing normal ASCII characters in an obvious dictionary way. However, this is not true if non-plain ASCII strings are used (for example, utf8 strings). 
However, the user can apply a transformation to the encoded string so that the first part of the element inserted in the sorted set will compare as the user requires for the specific application. For example, if I want to add strings that will be compared in a case-insensitive way, but I still want to retrieve the real case when querying, I can add strings in the following way: ``` ZADD autocomplete 0 foo:Foo 0 bar:BAR 0 zap:zap ``` Because of the first *normalized* part in every element (before the colon character), we are forcing a given comparison. However, after the range is queried using `ZRANGE ... BYLEX`, the application can display to the user the second part of the string, after the colon. The binary nature of the comparison allows sorted sets to be used as a general-purpose index: for example, the first part of the element can be a 64-bit big-endian number. Since big-endian numbers have the most significant bytes in the initial positions, the binary comparison will match the numerical comparison of the numbers. This can be used in order to implement range queries on 64-bit values. As in the example below, after the first 8 bytes, we can store the value of the element we are indexing. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays): list of elements in the specified range (optionally with their scores, in case the `WITHSCORES` option is given). Examples -------- ``` ZADD myzset 1 "one" ZADD myzset 2 "two" ZADD myzset 3 "three" ZRANGE myzset 0 -1 ZRANGE myzset 2 3 ZRANGE myzset -2 -1 ``` The following example using `WITHSCORES` shows how the command always returns an array, but this time populated with *element\_1*, *score\_1*, *element\_2*, *score\_2*, ..., *element\_N*, *score\_N*. 
``` ZRANGE myzset 0 1 WITHSCORES ``` This example shows how to query the sorted set by score, excluding the value `1` and up to infinity, returning only the second element of the result: ``` ZRANGE myzset (1 +inf BYSCORE LIMIT 1 1 ``` History ------- * Starting with Redis version 6.2.0: Added the `REV`, `BYSCORE`, `BYLEX` and `LIMIT` options. redis GEOPOS GEOPOS ====== ``` GEOPOS ``` Syntax ``` GEOPOS key [member [member ...]] ``` Available since: 3.2.0 Time complexity: O(N) where N is the number of members requested. ACL categories: `@read`, `@geo`, `@slow`, Return the positions (longitude,latitude) of all the specified members of the geospatial index represented by the sorted set at *key*. Given a sorted set representing a geospatial index, populated using the [`GEOADD`](../geoadd) command, it is often useful to obtain back the coordinates of specified members. When the geospatial index is populated via [`GEOADD`](../geoadd), the coordinates are converted into a 52 bit geohash, so the coordinates returned may not be exactly the ones used to add the elements, and small errors may be introduced. The command can accept a variable number of arguments, so it always returns an array of positions even when a single element is specified. Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays), specifically: The command returns an array where each element is a two-element array representing the longitude and latitude (x,y) of each member name passed as an argument to the command. Non-existing elements are reported as NULL elements of the array. Examples -------- ``` GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania" GEOPOS Sicily Palermo Catania NonExisting ``` redis LPUSH LPUSH ===== ``` LPUSH ``` Syntax ``` LPUSH key element [element ...] ``` Available since: 1.0.0 Time complexity: O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments. 
ACL categories: `@write`, `@list`, `@fast`, Insert all the specified values at the head of the list stored at `key`. If `key` does not exist, it is created as an empty list before performing the push operations. When `key` holds a value that is not a list, an error is returned. It is possible to push multiple elements using a single command call by specifying multiple arguments at the end of the command. Elements are inserted one after the other to the head of the list, from the leftmost element to the rightmost element. So for instance the command `LPUSH mylist a b c` will result in a list containing `c` as first element, `b` as second element and `a` as third element. Return ------ [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers): the length of the list after the push operations. Examples -------- ``` LPUSH mylist "world" LPUSH mylist "hello" LRANGE mylist 0 -1 ``` History ------- * Starting with Redis version 2.4.0: Accepts multiple `element` arguments. redis FT.CONFIG FT.CONFIG ========= ``` FT.CONFIG SET ``` Syntax ``` FT.CONFIG SET option value ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Search 1.0.0](https://redis.io/docs/stack/search) Time complexity: O(1) Set the value of a RediSearch configuration parameter. Values set using `FT.CONFIG SET` are not persisted after server restart. RediSearch configuration parameters are detailed in [Configuration parameters](https://redis.io/docs/stack/search/configuring). Note As detailed in the link above, not all RediSearch configuration parameters can be set at runtime. [Examples](#examples) Required arguments ------------------ `option` is the name of the configuration option, or '\*' for all options. `value` is the value of the configuration option. Return ------ FT.CONFIG SET returns a simple string reply `OK` if executed correctly, or an error reply otherwise. 
Examples -------- **Set runtime configuration options** ``` 127.0.0.1:6379> FT.CONFIG SET TIMEOUT 42 OK ``` See also -------- [`FT.CONFIG GET`](../ft.config-get) | [`FT.CONFIG HELP`](../ft.config-help) Related topics -------------- [RediSearch](https://redis.io/docs/stack/search) redis BF.SCANDUMP BF.SCANDUMP =========== ``` BF.SCANDUMP ``` Syntax ``` BF.SCANDUMP key iterator ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [Bloom 1.0.0](https://redis.io/docs/stack/bloom) Time complexity: O(n), where n is the capacity Begins an incremental save of the Bloom filter. This is useful for large Bloom filters that cannot fit into the normal [`DUMP`](../dump) and [`RESTORE`](../restore) model. The first time this command is called, the value of `iter` should be 0. This command returns successive `(iter, data)` pairs until `(0, NULL)` to indicate completion. ### Parameters * **key**: Name of the filter * **iter**: Iterator value; either 0 or the iterator from a previous invocation of this command Return ------ [Array reply](https://redis.io/docs/reference/protocol-spec#resp-arrays) of [Integer reply](https://redis.io/docs/reference/protocol-spec#resp-integers) (*Iterator*) and [Bulk string reply](https://redis.io/docs/reference/protocol-spec#resp-bulk-strings) (*Data*). The Iterator is passed as input to the next invocation of `SCANDUMP`. If *Iterator* is 0, iteration has completed. The iterator-data pair should also be passed to `LOADCHUNK` when restoring the filter. 
Examples
--------

```
redis> BF.RESERVE bf 0.1 10
OK
redis> BF.ADD bf item1
1) (integer) 1
redis> BF.SCANDUMP bf 0
1) (integer) 1
2) "\x01\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x05\x00\x00\x00\x02\x00\x00\x00\b\x00\x00\x00\x00\x00\x00\x00@\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x9a\x99\x99\x99\x99\x99\xa9?J\xf7\xd4\x9e\xde\xf0\x18@\x05\x00\x00\x00\n\x00\x00\x00\x00\x00\x00\x00\x00"
redis> BF.SCANDUMP bf 1
1) (integer) 9
2) "\x01\b\x00\x80\x00\x04 \x00"
redis> BF.SCANDUMP bf 9
1) (integer) 0
2) ""
redis> FLUSHALL
OK
redis> BF.LOADCHUNK bf 1 "\x01\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x05\x00\x00\x00\x02\x00\x00\x00\b\x00\x00\x00\x00\x00\x00\x00@\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x9a\x99\x99\x99\x99\x99\xa9?J\xf7\xd4\x9e\xde\xf0\x18@\x05\x00\x00\x00\n\x00\x00\x00\x00\x00\x00\x00\x00"
OK
redis> BF.LOADCHUNK bf 9 "\x01\b\x00\x80\x00\x04 \x00"
OK
redis> BF.EXISTS bf item1
(integer) 1
```

Python-style pseudocode for the same dump/restore loop, where `BF.SCANDUMP` and `BF.LOADCHUNK` stand for the corresponding client calls:

```
# Dump the filter in chunks until the iterator returns 0.
chunks = []
iter = 0
while True:
    iter, data = BF.SCANDUMP(key, iter)
    if iter == 0:
        break
    chunks.append([iter, data])

# Load it back
for iter, data in chunks:
    BF.LOADCHUNK(key, iter, data)
```

redis JSON.FORGET JSON.FORGET =========== ``` JSON.FORGET ``` Syntax ``` JSON.FORGET key [path] ``` Available in: [Redis Stack](https://redis.io/docs/stack) / [JSON 1.0.0](https://redis.io/docs/stack/json) Time complexity: O(N) when path is evaluated to a single value where N is the size of the deleted value, O(N) when path is evaluated to multiple values, where N is the size of the key See [`JSON.DEL`](../json.del). redis XGROUP XGROUP ====== ``` XGROUP SETID ``` Syntax ``` XGROUP SETID key group <id | $> [ENTRIESREAD entries-read] ``` Available since: 5.0.0 Time complexity: O(1) ACL categories: `@write`, `@stream`, `@slow`, Set the **last delivered ID** for a consumer group. Normally, a consumer group's last delivered ID is set when the group is created with [`XGROUP CREATE`](../xgroup-create). 
The `XGROUP SETID` command allows modifying the group's last delivered ID, without having to delete and recreate the group. For instance, if you want the consumers in a consumer group to re-process all the messages in a stream, you may want to set its next ID to 0: ``` XGROUP SETID mystream mygroup 0 ``` The optional `entries_read` argument can be specified to enable consumer group lag tracking for an arbitrary ID. An arbitrary ID is any ID that isn't the ID of the stream's first entry, its last entry or the zero ("0-0") ID. This can be useful when you know exactly how many entries are between the arbitrary ID (excluding it) and the stream's last entry. In such cases, `entries_read` can be set to the stream's `entries_added` minus that number of entries. Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` on success. History ------- * Starting with Redis version 7.0.0: Added the optional `entries_read` argument. redis CLUSTER CLUSTER ======= ``` CLUSTER SAVECONFIG ``` Syntax ``` CLUSTER SAVECONFIG ``` Available since: 3.0.0 Time complexity: O(1) ACL categories: `@admin`, `@slow`, `@dangerous`, Forces a node to save the `nodes.conf` configuration on disk. Before returning, the command calls `fsync(2)` in order to make sure the configuration is flushed to the computer disk. This command is mainly used in the event a `nodes.conf` node state file gets lost / deleted for some reason, and we want to generate it again from scratch. It can also be useful in case of mundane alterations of a node's cluster configuration via the [`CLUSTER`](../cluster) command, in order to ensure the new configuration is persisted on disk. However, all such commands should normally be able to schedule the configuration to be persisted on disk automatically whenever doing so is important for the correctness of the system in the event of a restart. 
Return ------ [Simple string reply](https://redis.io/docs/reference/protocol-spec#resp-simple-strings): `OK` or an error if the operation fails. underscore Underscore.js Underscore.js ============= Collection Functions (Arrays or Objects) ---------------------------------------- **each**`_.each(list, iteratee, [context])` Alias: **forEach** Iterates over a **list** of elements, yielding each in turn to an **iteratee** function. The **iteratee** is bound to the **context** object, if one is passed. Each invocation of **iteratee** is called with three arguments: (element, index, list). If **list** is a JavaScript object, **iteratee**'s arguments will be (value, key, list). Returns the **list** for chaining. ``` _.each([1, 2, 3], alert); => alerts each number in turn... _.each({one: 1, two: 2, three: 3}, alert); => alerts each number value in turn... ``` *Note: Collection functions work on arrays, objects, and array-like objects such as* arguments, NodeList *and similar. But it works by duck-typing, so avoid passing objects with a numeric length property. It's also good to note that an each loop cannot be broken out of — to break, use **\_.find** instead.* **map**`_.map(list, iteratee, [context])` Alias: **collect** Produces a new array of values by mapping each value in **list** through a transformation function ([**iteratee**](#iteratee)). The iteratee is passed three arguments: the value, then the index (or key) of the iteration, and finally a reference to the entire list. ``` _.map([1, 2, 3], function(num){ return num * 3; }); => [3, 6, 9] _.map({one: 1, two: 2, three: 3}, function(num, key){ return num * 3; }); => [3, 6, 9] _.map([[1, 2], [3, 4]], _.first); => [1, 3] ``` **reduce**`_.reduce(list, iteratee, [memo], [context])` Aliases: **inject**, **foldl** Also known as **inject** and **foldl**, reduce boils down a **list** of values into a single value. 
**Memo** is the initial state of the reduction, and each successive step of it should be returned by **iteratee**. The iteratee is passed four arguments: the memo, then the value and index (or key) of the iteration, and finally a reference to the entire list. If no memo is passed to the initial invocation of reduce, the iteratee is not invoked on the first element of the list. The first element is instead passed as the memo in the invocation of the iteratee on the next element in the list. ``` var sum = _.reduce([1, 2, 3], function(memo, num){ return memo + num; }, 0); => 6 ``` **reduceRight**`_.reduceRight(list, iteratee, [memo], [context])` Alias: **foldr** The right-associative version of **reduce**. **Foldr** is not as useful in JavaScript as it would be in a language with lazy evaluation. ``` var list = [[0, 1], [2, 3], [4, 5]]; var flat = _.reduceRight(list, function(a, b) { return a.concat(b); }, []); => [4, 5, 2, 3, 0, 1] ``` **find**`_.find(list, predicate, [context])` Alias: **detect** Looks through each value in the **list**, returning the first one that passes a truth test (**predicate**), or undefined if no value passes the test. The function returns as soon as it finds an acceptable element, and doesn't traverse the entire list. **predicate** is transformed through [**iteratee**](#iteratee) to facilitate shorthand syntaxes. ``` var even = _.find([1, 2, 3, 4, 5, 6], function(num){ return num % 2 == 0; }); => 2 ``` **filter**`_.filter(list, predicate, [context])` Alias: **select** Looks through each value in the **list**, returning an array of all the values that pass a truth test (**predicate**). **predicate** is transformed through [**iteratee**](#iteratee) to facilitate shorthand syntaxes. 
``` var evens = _.filter([1, 2, 3, 4, 5, 6], function(num){ return num % 2 == 0; }); => [2, 4, 6] ``` **findWhere**`_.findWhere(list, properties)` Looks through the **list** and returns the *first* value that [matches](#matches) all of the key-value pairs listed in **properties**. If no match is found, or if **list** is empty, *undefined* will be returned. ``` _.findWhere(publicServicePulitzers, {newsroom: "The New York Times"}); => {year: 1918, newsroom: "The New York Times", reason: "For its public service in publishing in full so many official reports, documents and speeches by European statesmen relating to the progress and conduct of the war."} ``` **where**`_.where(list, properties)` Looks through each value in the **list**, returning an array of all the values that [matches](#matches) the key-value pairs listed in **properties**. ``` _.where(listOfPlays, {author: "Shakespeare", year: 1611}); => [{title: "Cymbeline", author: "Shakespeare", year: 1611}, {title: "The Tempest", author: "Shakespeare", year: 1611}] ``` **reject**`_.reject(list, predicate, [context])` Returns the values in **list** without the elements that the truth test (**predicate**) passes. The opposite of **filter**. **predicate** is transformed through [**iteratee**](#iteratee) to facilitate shorthand syntaxes. ``` var odds = _.reject([1, 2, 3, 4, 5, 6], function(num){ return num % 2 == 0; }); => [1, 3, 5] ``` **every**`_.every(list, [predicate], [context])` Alias: **all** Returns *true* if all of the values in the **list** pass the **predicate** truth test. Short-circuits and stops traversing the list if a false element is found. **predicate** is transformed through [**iteratee**](#iteratee) to facilitate shorthand syntaxes. ``` _.every([2, 4, 5], function(num) { return num % 2 == 0; }); => false ``` **some**`_.some(list, [predicate], [context])` Alias: **any** Returns *true* if any of the values in the **list** pass the **predicate** truth test. 
Short-circuits and stops traversing the list if a true element is found. **predicate** is transformed through [**iteratee**](#iteratee) to facilitate shorthand syntaxes. ``` _.some([null, 0, 'yes', false]); => true ``` **contains**`_.contains(list, value, [fromIndex])` Aliases: **include**, **includes** Returns *true* if the **value** is present in the **list**. Uses **indexOf** internally, if **list** is an Array. Use **fromIndex** to start your search at a given index. ``` _.contains([1, 2, 3], 3); => true ``` **invoke**`_.invoke(list, methodName, *arguments)` Calls the method named by **methodName** on each value in the **list**. Any extra arguments passed to **invoke** will be forwarded on to the method invocation. ``` _.invoke([[5, 1, 7], [3, 2, 1]], 'sort'); => [[1, 5, 7], [1, 2, 3]] ``` **pluck**`_.pluck(list, propertyName)` A convenient version of what is perhaps the most common use-case for **map**: extracting a list of property values. ``` var stooges = [{name: 'moe', age: 40}, {name: 'larry', age: 50}, {name: 'curly', age: 60}]; _.pluck(stooges, 'name'); => ["moe", "larry", "curly"] ``` **max**`_.max(list, [iteratee], [context])` Returns the maximum value in **list**. If an [**iteratee**](#iteratee) function is provided, it will be used on each value to generate the criterion by which the value is ranked. *-Infinity* is returned if **list** is empty, so an [isEmpty](#isEmpty) guard may be required. This function can currently only compare numbers reliably. This function uses operator < ([note](#relational-operator-note)). ``` var stooges = [{name: 'moe', age: 40}, {name: 'larry', age: 50}, {name: 'curly', age: 60}]; _.max(stooges, function(stooge){ return stooge.age; }); => {name: 'curly', age: 60}; ``` **min**`_.min(list, [iteratee], [context])` Returns the minimum value in **list**. If an [**iteratee**](#iteratee) function is provided, it will be used on each value to generate the criterion by which the value is ranked. 
*Infinity* is returned if **list** is empty, so an [isEmpty](#isEmpty) guard may be required. This function can currently only compare numbers reliably. This function uses operator < ([note](#relational-operator-note)). ``` var numbers = [10, 5, 100, 2, 1000]; _.min(numbers); => 2 ``` **sortBy**`_.sortBy(list, iteratee, [context])` Returns a (stably) sorted copy of **list**, ranked in ascending order by the results of running each value through [**iteratee**](#iteratee). iteratee may also be the string name of the property to sort by (eg. length). This function uses operator < ([note](#relational-operator-note)). ``` _.sortBy([1, 2, 3, 4, 5, 6], function(num){ return Math.sin(num); }); => [5, 4, 6, 3, 1, 2] var stooges = [{name: 'moe', age: 40}, {name: 'larry', age: 50}, {name: 'curly', age: 60}]; _.sortBy(stooges, 'name'); => [{name: 'curly', age: 60}, {name: 'larry', age: 50}, {name: 'moe', age: 40}]; ``` **groupBy**`_.groupBy(list, iteratee, [context])` Splits a collection into sets, grouped by the result of running each value through **iteratee**. If **iteratee** is a string instead of a function, groups by the property named by **iteratee** on each of the values. ``` _.groupBy([1.3, 2.1, 2.4], function(num){ return Math.floor(num); }); => {1: [1.3], 2: [2.1, 2.4]} _.groupBy(['one', 'two', 'three'], 'length'); => {3: ["one", "two"], 5: ["three"]} ``` **indexBy**`_.indexBy(list, iteratee, [context])` Given a **list**, and an [**iteratee**](#iteratee) function that returns a key for each element in the list (or a property name), returns an object with an index of each item. Just like [groupBy](#groupBy), but for when you know your keys are unique. 
``` var stooges = [{name: 'moe', age: 40}, {name: 'larry', age: 50}, {name: 'curly', age: 60}]; _.indexBy(stooges, 'age'); => { "40": {name: 'moe', age: 40}, "50": {name: 'larry', age: 50}, "60": {name: 'curly', age: 60} } ``` **countBy**`_.countBy(list, iteratee, [context])` Sorts a list into groups and returns a count for the number of objects in each group. Similar to groupBy, but instead of returning a list of values, returns a count for the number of values in that group. ``` _.countBy([1, 2, 3, 4, 5], function(num) { return num % 2 == 0 ? 'even': 'odd'; }); => {odd: 3, even: 2} ``` **shuffle**`_.shuffle(list)` Returns a shuffled copy of the **list**, using a version of the [Fisher-Yates shuffle](https://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle). ``` _.shuffle([1, 2, 3, 4, 5, 6]); => [4, 1, 6, 3, 5, 2] ``` **sample**`_.sample(list, [n])` Produce a random sample from the **list**. Pass a number to return **n** random elements from the list. Otherwise a single random item will be returned. ``` _.sample([1, 2, 3, 4, 5, 6]); => 4 _.sample([1, 2, 3, 4, 5, 6], 3); => [1, 6, 2] ``` **toArray**`_.toArray(list)` Creates a real Array from the **list** (anything that can be iterated over). Useful for transmuting the **arguments** object. ``` (function(){ return _.toArray(arguments).slice(1); })(1, 2, 3, 4); => [2, 3, 4] ``` **size**`_.size(list)` Return the number of values in the **list**. ``` _.size([1, 2, 3, 4, 5]); => 5 _.size({one: 1, two: 2, three: 3}); => 3 ``` **partition**`_.partition(list, predicate)` Split **list** into two arrays: one whose elements all satisfy **predicate** and one whose elements all do not satisfy **predicate**. **predicate** is transformed through [**iteratee**](#iteratee) to facilitate shorthand syntaxes. ``` _.partition([0, 1, 2, 3, 4, 5], isOdd); => [[1, 3, 5], [0, 2, 4]] ``` **compact**`_.compact(list)` Returns a copy of the **list** with all falsy values removed. 
In JavaScript, *false*, *null*, *0*, *""*, *undefined* and *NaN* are all falsy. ``` _.compact([0, 1, false, 2, '', 3]); => [1, 2, 3] ``` Array Functions --------------- *Note: All array functions will also work on the **arguments** object. However, Underscore functions are not designed to work on "sparse" arrays.* **first**`_.first(array, [n])` Aliases: **head**, **take** Returns the first element of an **array**. Passing **n** will return the first **n** elements of the array. ``` _.first([5, 4, 3, 2, 1]); => 5 ``` **initial**`_.initial(array, [n])` Returns everything but the last entry of the array. Especially useful on the arguments object. Pass **n** to exclude the last **n** elements from the result. ``` _.initial([5, 4, 3, 2, 1]); => [5, 4, 3, 2] ``` **last**`_.last(array, [n])` Returns the last element of an **array**. Passing **n** will return the last **n** elements of the array. ``` _.last([5, 4, 3, 2, 1]); => 1 ``` **rest**`_.rest(array, [index])` Aliases: **tail**, **drop** Returns the **rest** of the elements in an array. Pass an **index** to return the values of the array from that index onward. ``` _.rest([5, 4, 3, 2, 1]); => [4, 3, 2, 1] ``` **flatten**`_.flatten(array, [depth])` Flattens a nested **array**. If you pass true or 1 as the **depth**, the array will only be flattened a single level. Passing a greater number will cause the flattening to descend deeper into the nesting hierarchy. Omitting the **depth** argument, or passing false or Infinity, flattens the array all the way to the deepest nesting level. ``` _.flatten([1, [2], [3, [[4]]]]); => [1, 2, 3, 4]; _.flatten([1, [2], [3, [[4]]]], true); => [1, 2, 3, [[4]]]; _.flatten([1, [2], [3, [[4]]]], 2); => [1, 2, 3, [4]]; ``` **without**`_.without(array, *values)` Returns a copy of the **array** with all instances of the **values** removed. 
``` _.without([1, 2, 1, 0, 3, 1, 4], 0, 1); => [2, 3, 4] ``` **union**`_.union(*arrays)` Computes the union of the passed-in **arrays**: the list of unique items, in order, that are present in one or more of the **arrays**. ``` _.union([1, 2, 3], [101, 2, 1, 10], [2, 1]); => [1, 2, 3, 101, 10] ``` **intersection**`_.intersection(*arrays)` Computes the list of values that are the intersection of all the **arrays**. Each value in the result is present in each of the **arrays**. ``` _.intersection([1, 2, 3], [101, 2, 1, 10], [2, 1]); => [1, 2] ``` **difference**`_.difference(array, *others)` Similar to **without**, but returns the values from **array** that are not present in the **other** arrays. ``` _.difference([1, 2, 3, 4, 5], [5, 2, 10]); => [1, 3, 4] ``` **uniq**`_.uniq(array, [isSorted], [iteratee])` Alias: **unique** Produces a duplicate-free version of the **array**, using *===* to test object equality. In particular only the first occurrence of each value is kept. If you know in advance that the **array** is sorted, passing *true* for **isSorted** will run a much faster algorithm. If you want to compute unique items based on a transformation, pass an [**iteratee**](#iteratee) function. ``` _.uniq([1, 2, 1, 4, 1, 3]); => [1, 2, 4, 3] ``` **zip**`_.zip(*arrays)` Merges together the values of each of the **arrays** with the values at the corresponding position. Useful when you have separate data sources that are coordinated through matching array indexes. ``` _.zip(['moe', 'larry', 'curly'], [30, 40, 50], [true, false, false]); => [["moe", 30, true], ["larry", 40, false], ["curly", 50, false]] ``` **unzip**`_.unzip(array)` Alias: **transpose** The opposite of [zip](#zip). Given an **array** of arrays, returns a series of new arrays, the first of which contains all of the first elements in the input arrays, the second of which contains all of the second elements, and so on. 
If you're working with a matrix of nested arrays, this can be used to transpose the matrix. ``` _.unzip([["moe", 30, true], ["larry", 40, false], ["curly", 50, false]]); => [['moe', 'larry', 'curly'], [30, 40, 50], [true, false, false]] ``` **object**`_.object(list, [values])` Converts arrays into objects. Pass either a single list of [key, value] pairs, or a list of keys, and a list of values. Passing by pairs is the reverse of [pairs](#pairs). If duplicate keys exist, the last value wins. ``` _.object(['moe', 'larry', 'curly'], [30, 40, 50]); => {moe: 30, larry: 40, curly: 50} _.object([['moe', 30], ['larry', 40], ['curly', 50]]); => {moe: 30, larry: 40, curly: 50} ``` **chunk**`_.chunk(array, length)` Chunks an **array** into multiple arrays, each containing **length** or fewer items. ``` var partners = _.chunk(_.shuffle(kindergarten), 2); => [["Tyrone", "Elie"], ["Aidan", "Sam"], ["Katrina", "Billie"], ["Little Timmy"]] ``` **indexOf**`_.indexOf(array, value, [isSorted])` Returns the index at which **value** can be found in the **array**, or *-1* if value is not present in the **array**. If you're working with a large array, and you know that the array is already sorted, pass true for **isSorted** to use a faster binary search ... or, pass a number as the third argument in order to look for the first matching value in the array after the given index. If isSorted is true, this function uses operator < ([note](#relational-operator-note)). ``` _.indexOf([1, 2, 3], 2); => 1 ``` **lastIndexOf**`_.lastIndexOf(array, value, [fromIndex])` Returns the index of the last occurrence of **value** in the **array**, or *-1* if value is not present. Pass **fromIndex** to start your search at a given index. 
``` _.lastIndexOf([1, 2, 3, 1, 2, 3], 2); => 4 ``` **sortedIndex**`_.sortedIndex(array, value, [iteratee], [context])` Uses a binary search to determine the smallest index at which the **value** *should* be inserted into the **array** in order to maintain the **array**'s sorted order. If an [**iteratee**](#iteratee) function is provided, it will be used to compute the sort ranking of each value, including the **value** you pass. The iteratee may also be the string name of the property to sort by (eg. length). This function uses operator < ([note](#relational-operator-note)). ``` _.sortedIndex([10, 20, 30, 40, 50], 35); => 3 var stooges = [{name: 'moe', age: 40}, {name: 'curly', age: 60}]; _.sortedIndex(stooges, {name: 'larry', age: 50}, 'age'); => 1 ``` **findIndex**`_.findIndex(array, predicate, [context])` Similar to [\_.indexOf](#indexOf), returns the first index where the **predicate** truth test passes; otherwise returns *-1*. ``` _.findIndex([4, 6, 8, 12], isPrime); => -1 // not found _.findIndex([4, 6, 7, 12], isPrime); => 2 ``` **findLastIndex**`_.findLastIndex(array, predicate, [context])` Like [\_.findIndex](#findIndex) but iterates the array in reverse, returning the index closest to the end where the **predicate** truth test passes. ``` var users = [{'id': 1, 'name': 'Bob', 'last': 'Brown'}, {'id': 2, 'name': 'Ted', 'last': 'White'}, {'id': 3, 'name': 'Frank', 'last': 'James'}, {'id': 4, 'name': 'Ted', 'last': 'Jones'}]; _.findLastIndex(users, { name: 'Ted' }); => 3 ``` **range**`_.range([start], stop, [step])` A function to create flexibly-numbered lists of integers, handy for each and map loops. **start**, if omitted, defaults to *0*; **step** defaults to *1*. Returns a list of integers from **start** (inclusive) to **stop** (exclusive), incremented (or decremented) by **step**. Note that ranges that **stop** before they **start** are considered to be zero-length instead of negative — if you'd like a negative range, use a negative **step**. 
``` _.range(10); => [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] _.range(1, 11); => [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] _.range(0, 30, 5); => [0, 5, 10, 15, 20, 25] _.range(0, -10, -1); => [0, -1, -2, -3, -4, -5, -6, -7, -8, -9] _.range(0); => [] ``` Function (uh, ahem) Functions ----------------------------- **bind**`_.bind(function, object, *arguments)` Bind a **function** to an **object**, meaning that whenever the function is called, the value of *this* will be the **object**. Optionally, pass **arguments** to the **function** to pre-fill them, also known as **partial application**. For partial application without context binding, use [partial](#partial). ``` var func = function(greeting){ return greeting + ': ' + this.name }; func = _.bind(func, {name: 'moe'}, 'hi'); func(); => 'hi: moe' ``` **bindAll**`_.bindAll(object, *methodNames)` Binds a number of methods on the **object**, specified by **methodNames**, to be run in the context of that object whenever they are invoked. Very handy for binding functions that are going to be used as event handlers, which would otherwise be invoked with a fairly useless *this*. **methodNames** are required. ``` var buttonView = { label : 'underscore', onClick: function(){ alert('clicked: ' + this.label); }, onHover: function(){ console.log('hovering: ' + this.label); } }; _.bindAll(buttonView, 'onClick', 'onHover'); // When the button is clicked, this.label will have the correct value. jQuery('#underscore_button').on('click', buttonView.onClick); ``` **partial**`_.partial(function, *arguments)` Partially apply a function by filling in any number of its **arguments**, *without* changing its dynamic this value. A close cousin of [bind](#bind). You may pass \_ in your list of **arguments** to specify an argument that should not be pre-filled, but left open to supply at call-time. 
``` var subtract = function(a, b) { return b - a; }; sub5 = _.partial(subtract, 5); sub5(20); => 15 // Using a placeholder subFrom20 = _.partial(subtract, _, 20); subFrom20(5); => 15 ``` **memoize**`_.memoize(function, [hashFunction])` Memoizes a given **function** by caching the computed result. Useful for speeding up slow-running computations. If passed an optional **hashFunction**, it will be used to compute the hash key for storing the result, based on the arguments to the original function. The default **hashFunction** just uses the first argument to the memoized function as the key. The cache of memoized values is available as the cache property on the returned function. ``` var fibonacci = _.memoize(function(n) { return n < 2 ? n: fibonacci(n - 1) + fibonacci(n - 2); }); ``` **delay**`_.delay(function, wait, *arguments)` Much like **setTimeout**, invokes **function** after **wait** milliseconds. If you pass the optional **arguments**, they will be forwarded on to the **function** when it is invoked. ``` var log = _.bind(console.log, console); _.delay(log, 1000, 'logged later'); => 'logged later' // Appears after one second. ``` **defer**`_.defer(function, *arguments)` Defers invoking the **function** until the current call stack has cleared, similar to using **setTimeout** with a delay of 0. Useful for performing expensive computations or HTML rendering in chunks without blocking the UI thread from updating. If you pass the optional **arguments**, they will be forwarded on to the **function** when it is invoked. ``` _.defer(function(){ alert('deferred'); }); // Returns from the function before the alert runs. ``` **throttle**`_.throttle(function, wait, [options])` Creates and returns a new, throttled version of the passed function, that, when invoked repeatedly, will only actually call the original function at most once per every **wait** milliseconds. Useful for rate-limiting events that occur faster than you can keep up with. 
By default, **throttle** will execute the function as soon as you call it for the first time, and, if you call it again any number of times during the **wait** period, as soon as that period is over. If you'd like to disable the leading-edge call, pass {leading: false}, and if you'd like to disable the execution on the trailing-edge, pass {trailing: false}. ``` var throttled = _.throttle(updatePosition, 100); $(window).scroll(throttled); ``` If you need to cancel a scheduled throttle, you can call .cancel() on the throttled function. **debounce**`_.debounce(function, wait, [immediate])` Creates and returns a new debounced version of the passed function which will postpone its execution until after **wait** milliseconds have elapsed since the last time it was invoked. Useful for implementing behavior that should only happen *after* the input has stopped arriving. For example: rendering a preview of a Markdown comment, recalculating a layout after the window has stopped being resized, and so on. At the end of the **wait** interval, the function will be called with the arguments that were passed *most recently* to the debounced function. Pass true for the **immediate** argument to cause **debounce** to trigger the function on the leading instead of the trailing edge of the **wait** interval. Useful in circumstances like preventing accidental double-clicks on a "submit" button from firing a second time. ``` var lazyLayout = _.debounce(calculateLayout, 300); $(window).resize(lazyLayout); ``` If you need to cancel a scheduled debounce, you can call .cancel() on the debounced function. **once**`_.once(function)` Creates a version of the function that can only be called one time. Repeated calls to the modified function will have no effect, returning the value from the original call. Useful for initialization functions, instead of having to set a boolean flag and then check it later. 
``` var initialize = _.once(createApplication); initialize(); initialize(); // Application is only created once. ``` **after**`_.after(count, function)` Creates a wrapper of **function** that does nothing at first. From the **count**-th call onwards, it starts actually calling **function**. Useful for grouping asynchronous responses, where you want to be sure that all the async calls have finished, before proceeding. ``` var renderNotes = _.after(notes.length, render); _.each(notes, function(note) { note.asyncSave({success: renderNotes}); }); // renderNotes is run once, after all notes have saved. ``` **before**`_.before(count, function)` Creates a wrapper of **function** that memoizes its return value. From the **count**-th call onwards, the memoized result of the last invocation is returned immediately instead of invoking **function** again. So the wrapper will invoke **function** at most **count** - 1 times. ``` var monthlyMeeting = _.before(3, askForRaise); monthlyMeeting(); monthlyMeeting(); monthlyMeeting(); // the result of any subsequent calls is the same as the second call ``` **wrap**`_.wrap(function, wrapper)` Wraps the first **function** inside of the **wrapper** function, passing it as the first argument. This allows the **wrapper** to execute code before and after the **function** runs, adjust the arguments, and execute it conditionally. ``` var hello = function(name) { return "hello: " + name; }; hello = _.wrap(hello, function(func) { return "before, " + func("moe") + ", after"; }); hello(); => 'before, hello: moe, after' ``` **negate**`_.negate(predicate)` Returns a new negated version of the [**predicate**](#iteratee) function. ``` var isFalsy = _.negate(Boolean); _.find([-2, -1, 0, 1, 2], isFalsy); => 0 ``` **compose**`_.compose(*functions)` Returns the composition of a list of **functions**, where each function consumes the return value of the function that follows. 
In math terms, composing the functions *f()*, *g()*, and *h()* produces *f(g(h()))*. ``` var greet = function(name){ return "hi: " + name; }; var exclaim = function(statement){ return statement.toUpperCase() + "!"; }; var welcome = _.compose(greet, exclaim); welcome('moe'); => 'hi: MOE!' ``` **restArguments**`_.restArguments(function, [startIndex])` Returns a version of the **function** that, when called, receives all arguments from and beyond **startIndex** collected into a single array. If you don’t pass an explicit **startIndex**, it will be determined by looking at the number of arguments to the **function** itself. Similar to ES6’s [rest parameters syntax](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/rest_parameters). ``` var raceResults = _.restArguments(function(gold, silver, bronze, everyoneElse) { _.each(everyoneElse, sendConsolations); }); raceResults("Dopey", "Grumpy", "Happy", "Sneezy", "Bashful", "Sleepy", "Doc"); ``` Object Functions ---------------- **keys**`_.keys(object)` Retrieve all the names of the **object**'s own enumerable properties. ``` _.keys({one: 1, two: 2, three: 3}); => ["one", "two", "three"] ``` **allKeys**`_.allKeys(object)` Retrieve *all* the names of **object**'s own and inherited properties. ``` function Stooge(name) { this.name = name; } Stooge.prototype.silly = true; _.allKeys(new Stooge("Moe")); => ["name", "silly"] ``` **values**`_.values(object)` Return all of the values of the **object**'s own properties. ``` _.values({one: 1, two: 2, three: 3}); => [1, 2, 3] ``` **mapObject**`_.mapObject(object, iteratee, [context])` Like [map](#map), but for objects. Transform the value of each property in turn. ``` _.mapObject({start: 5, end: 12}, function(val, key) { return val + 5; }); => {start: 10, end: 17} ``` **pairs**`_.pairs(object)` Convert an object into a list of [key, value] pairs. The opposite of [object](#object). 
``` _.pairs({one: 1, two: 2, three: 3}); => [["one", 1], ["two", 2], ["three", 3]] ``` **invert**`_.invert(object)` Returns a copy of the **object** where the keys have become the values and the values the keys. For this to work, all of your object's values should be unique and string serializable. ``` _.invert({Moe: "Moses", Larry: "Louis", Curly: "Jerome"}); => {Moses: "Moe", Louis: "Larry", Jerome: "Curly"}; ``` **create**`_.create(prototype, props)` Creates a new object with the given prototype, optionally attaching **props** as *own* properties. Basically, Object.create, but without all of the property descriptor jazz. ``` var moe = _.create(Stooge.prototype, {name: "Moe"}); ``` **functions**`_.functions(object)` Alias: **methods** Returns a sorted list of the names of every method in an object — that is to say, the name of every function property of the object. ``` _.functions(_); => ["all", "any", "bind", "bindAll", "clone", "compact", "compose" ... ``` **findKey**`_.findKey(object, predicate, [context])` Similar to [\_.findIndex](#findIndex) but for keys in objects. Returns the *key* where the **predicate** truth test passes or *undefined*. **predicate** is transformed through [**iteratee**](#iteratee) to facilitate shorthand syntaxes. **extend**`_.extend(destination, *sources)` Shallowly copy all of the properties **in** the **source** objects over to the **destination** object, and return the **destination** object. Any nested objects or arrays will be copied by reference, not duplicated. It's in-order, so the last source will override properties of the same name in previous arguments. ``` _.extend({name: 'moe'}, {age: 50}); => {name: 'moe', age: 50} ``` **extendOwn**`_.extendOwn(destination, *sources)` Alias: **assign** Like **extend**, but only copies *own* properties over to the destination object. **pick**`_.pick(object, *keys)` Return a copy of the **object**, filtered to only have values for the allowed **keys** (or array of valid keys). 
Alternatively accepts a predicate indicating which keys to pick. ``` _.pick({name: 'moe', age: 50, userid: 'moe1'}, 'name', 'age'); => {name: 'moe', age: 50} _.pick({name: 'moe', age: 50, userid: 'moe1'}, function(value, key, object) { return _.isNumber(value); }); => {age: 50} ``` **omit**`_.omit(object, *keys)` Return a copy of the **object**, filtered to omit the disallowed **keys** (or array of keys). Alternatively accepts a predicate indicating which keys to omit. ``` _.omit({name: 'moe', age: 50, userid: 'moe1'}, 'userid'); => {name: 'moe', age: 50} _.omit({name: 'moe', age: 50, userid: 'moe1'}, function(value, key, object) { return _.isNumber(value); }); => {name: 'moe', userid: 'moe1'} ``` **defaults**`_.defaults(object, *defaults)` Returns **object** after filling in its undefined properties with the first value present in the following list of **defaults** objects. ``` var iceCream = {flavor: "chocolate"}; _.defaults(iceCream, {flavor: "vanilla", sprinkles: "lots"}); => {flavor: "chocolate", sprinkles: "lots"} ``` **clone**`_.clone(object)` Create a shallow-copied clone of the provided *plain* **object**. Any nested objects or arrays will be copied by reference, not duplicated. ``` _.clone({name: 'moe'}); => {name: 'moe'}; ``` **tap**`_.tap(object, interceptor)` Invokes **interceptor** with the **object**, and then returns **object**. The primary purpose of this method is to "tap into" a method chain, in order to perform operations on intermediate results within the chain. ``` _.chain([1,2,3,200]) .filter(function(num) { return num % 2 == 0; }) .tap(alert) .map(function(num) { return num * num }) .value(); => // [2, 200] (alerted) => [4, 40000] ``` **toPath**`_.toPath(path)` Ensures that **path** is an array. If **path** is a string, it is wrapped in a single-element array; if it is an array already, it is returned unmodified. 
``` _.toPath('key'); => ['key'] _.toPath(['a', 0, 'b']); => ['a', 0, 'b'] // (same array) ``` \_.toPath is used internally in has, get, invoke, property, propertyOf and result, as well as in [**iteratee**](#iteratee) and all functions that depend on it, in order to normalize deep property paths. You can override \_.toPath if you want to customize this behavior, for example to enable Lodash-like string path shorthands. Be advised that altering \_.toPath will unavoidably cause some keys to become unreachable; override at your own risk. ``` // Support dotted path shorthands. var originalToPath = _.toPath; _.mixin({ toPath: function(path) { return _.isString(path) ? path.split('.') : originalToPath(path); } }); _.get({a: [{b: 5}]}, 'a.0.b'); => 5 ``` **get**`_.get(object, path, [default])` Returns the specified property of **object**. **path** may be specified as a simple key, or as an array of object keys or array indexes, for deep property fetching. If the property does not exist or is undefined, the optional **default** is returned. ``` _.get({a: 10}, 'a'); => 10 _.get({a: [{b: 2}]}, ['a', 0, 'b']); => 2 _.get({a: 10}, 'b', 100); => 100 ``` **has**`_.has(object, key)` Does the object contain the given key? Identical to object.hasOwnProperty(key), but uses a safe reference to the hasOwnProperty function, in case it's been [overridden accidentally](https://www.pixelstech.net/article/1326986170-An-Object-is-not-a-Hash). ``` _.has({a: 1, b: 2, c: 3}, "b"); => true ``` **property**`_.property(path)` Returns a function that will return the specified property of any passed-in object. path may be specified as a simple key, or as an array of object keys or array indexes, for deep property fetching. 
``` var stooge = {name: 'moe'}; 'moe' === _.property('name')(stooge); => true var stooges = {moe: {fears: {worst: 'Spiders'}}, curly: {fears: {worst: 'Moe'}}}; var curlysWorstFear = _.property(['curly', 'fears', 'worst']); curlysWorstFear(stooges); => 'Moe' ``` **propertyOf**`_.propertyOf(object)` Inverse of \_.property. Takes an object and returns a function which will return the value of a provided property. ``` var stooge = {name: 'moe'}; _.propertyOf(stooge)('name'); => 'moe' ``` **matcher**`_.matcher(attrs)` Alias: **matches** Returns a predicate function that will tell you if a passed in object contains all of the key/value properties present in **attrs**. ``` var ready = _.matcher({selected: true, visible: true}); var readyToGoList = _.filter(list, ready); ``` **isEqual**`_.isEqual(object, other)` Performs an optimized deep comparison between the two objects, to determine if they should be considered equal. ``` var stooge = {name: 'moe', luckyNumbers: [13, 27, 34]}; var clone = {name: 'moe', luckyNumbers: [13, 27, 34]}; stooge == clone; => false _.isEqual(stooge, clone); => true ``` **isMatch**`_.isMatch(object, properties)` Tells you if the keys and values in **properties** are contained in **object**. ``` var stooge = {name: 'moe', age: 32}; _.isMatch(stooge, {age: 32}); => true ``` **isEmpty**`_.isEmpty(collection)` Returns *true* if **collection** has no elements. For strings and array-like objects \_.isEmpty checks if the length property is 0. For other objects, it returns *true* if the object has no enumerable own-properties. Note that primitive numbers, booleans and symbols are always empty by this definition. ``` _.isEmpty([1, 2, 3]); => false _.isEmpty({}); => true ``` **isElement**`_.isElement(object)` Returns *true* if **object** is a DOM element. ``` _.isElement(jQuery('body')[0]); => true ``` **isArray**`_.isArray(object)` Returns *true* if **object** is an Array. 
``` (function(){ return _.isArray(arguments); })(); => false _.isArray([1,2,3]); => true ``` **isObject**`_.isObject(value)` Returns *true* if **value** is an Object. Note that JavaScript arrays and functions are objects, while (normal) strings and numbers are not. ``` _.isObject({}); => true _.isObject(1); => false ``` **isArguments**`_.isArguments(object)` Returns *true* if **object** is an Arguments object. ``` (function(){ return _.isArguments(arguments); })(1, 2, 3); => true _.isArguments([1,2,3]); => false ``` **isFunction**`_.isFunction(object)` Returns *true* if **object** is a Function. ``` _.isFunction(alert); => true ``` **isString**`_.isString(object)` Returns *true* if **object** is a String. ``` _.isString("moe"); => true ``` **isNumber**`_.isNumber(object)` Returns *true* if **object** is a Number (including NaN). ``` _.isNumber(8.4 * 5); => true ``` **isFinite**`_.isFinite(object)` Returns *true* if **object** is a finite Number. ``` _.isFinite(-101); => true _.isFinite(-Infinity); => false ``` **isBoolean**`_.isBoolean(object)` Returns *true* if **object** is either *true* or *false*. ``` _.isBoolean(null); => false ``` **isDate**`_.isDate(object)` Returns *true* if **object** is a Date. ``` _.isDate(new Date()); => true ``` **isRegExp**`_.isRegExp(object)` Returns *true* if **object** is a RegExp. ``` _.isRegExp(/moe/); => true ``` **isError**`_.isError(object)` Returns *true* if **object** inherits from an Error. ``` try { throw new TypeError("Example"); } catch (o_O) { _.isError(o_O); } => true ``` **isSymbol**`_.isSymbol(object)` Returns *true* if **object** is a [Symbol](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol). ``` _.isSymbol(Symbol()); => true ``` **isMap**`_.isMap(object)` Returns *true* if **object** is a [Map](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map). 
``` _.isMap(new Map()); => true ``` **isWeakMap**`_.isWeakMap(object)` Returns *true* if **object** is a [WeakMap](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WeakMap). ``` _.isWeakMap(new WeakMap()); => true ``` **isSet**`_.isSet(object)` Returns *true* if **object** is a [Set](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set). ``` _.isSet(new Set()); => true ``` **isWeakSet**`_.isWeakSet(object)` Returns *true* if **object** is a [WeakSet](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WeakSet). ``` _.isWeakSet(new WeakSet()); => true ``` **isArrayBuffer**`_.isArrayBuffer(object)` Returns *true* if **object** is an [ArrayBuffer](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer). ``` _.isArrayBuffer(new ArrayBuffer(8)); => true ``` **isDataView**`_.isDataView(object)` Returns *true* if **object** is a [DataView](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/DataView). ``` _.isDataView(new DataView(new ArrayBuffer(8))); => true ``` **isTypedArray**`_.isTypedArray(object)` Returns *true* if **object** is a [TypedArray](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/TypedArray). ``` _.isTypedArray(new Int8Array(8)); => true ``` **isNaN**`_.isNaN(object)` Returns *true* if **object** is *NaN*. Note: this is not the same as the native **isNaN** function, which will also return true for many other not-number values, such as undefined. ``` _.isNaN(NaN); => true isNaN(undefined); => true _.isNaN(undefined); => false ``` **isNull**`_.isNull(object)` Returns *true* if the value of **object** is *null*. ``` _.isNull(null); => true _.isNull(undefined); => false ``` **isUndefined**`_.isUndefined(value)` Returns *true* if **value** is *undefined*. 
``` _.isUndefined(window.missingVariable); => true ``` Utility Functions ----------------- **noConflict**`_.noConflict()` Give control of the global \_ variable back to its previous owner. Returns a reference to the **Underscore** object. ``` var underscore = _.noConflict(); ``` The \_.noConflict function is not present if you use the EcmaScript 6, AMD or CommonJS module system to import Underscore. **identity**`_.identity(value)` Returns the same value that is used as the argument. In math: f(x) = x This function looks useless, but is used throughout Underscore as a default iteratee. ``` var stooge = {name: 'moe'}; stooge === _.identity(stooge); => true ``` **constant**`_.constant(value)` Creates a function that returns the same value that is used as the argument of \_.constant. ``` var stooge = {name: 'moe'}; stooge === _.constant(stooge)(); => true ``` **noop**`_.noop()` Returns undefined irrespective of the arguments passed to it. Useful as the default for optional callback arguments. ``` obj.initialize = _.noop; ``` **times**`_.times(n, iteratee, [context])` Invokes the given iteratee function **n** times. Each invocation of [**iteratee**](#iteratee) is called with an index argument. Produces an array of the returned values. ``` _.times(3, function(n){ genie.grantWishNumber(n); }); ``` **random**`_.random(min, max)` Returns a random integer between **min** and **max**, inclusive. If you only pass one argument, it will return a number between 0 and that number. ``` _.random(0, 100); => 42 ``` **mixin**`_.mixin(object)` Allows you to extend Underscore with your own utility functions. Pass a hash of {name: function} definitions to have your functions added to the Underscore object, as well as the OOP wrapper. Returns the Underscore object to facilitate chaining. 
``` _.mixin({ capitalize: function(string) { return string.charAt(0).toUpperCase() + string.substring(1).toLowerCase(); } }); _("fabio").capitalize(); => "Fabio" ``` **iteratee**`_.iteratee(value, [context])` Generates a callback that can be applied to each element in a collection. \_.iteratee supports a number of shorthand syntaxes for common callback use cases. Depending upon value's type, \_.iteratee will return: ``` // No value _.iteratee(); => _.identity() // Function _.iteratee(function(n) { return n * 2; }); => function(n) { return n * 2; } // Object _.iteratee({firstName: 'Chelsea'}); => _.matcher({firstName: 'Chelsea'}); // Anything else _.iteratee('firstName'); => _.property('firstName'); ``` The following Underscore methods transform their predicates through \_.iteratee: countBy, every, filter, find, findIndex, findKey, findLastIndex, groupBy, indexBy, map, mapObject, max, min, partition, reject, some, sortBy, sortedIndex, and uniq You may overwrite \_.iteratee with your own custom function, if you want additional or different shorthand syntaxes: ``` // Support `RegExp` predicate shorthand. var builtinIteratee = _.iteratee; _.iteratee = function(value, context) { if (_.isRegExp(value)) return function(obj) { return value.test(obj) }; return builtinIteratee(value, context); }; ``` **uniqueId**`_.uniqueId([prefix])` Generate a globally-unique id for client-side models or DOM elements that need one. If **prefix** is passed, the id will be appended to it. ``` _.uniqueId('contact_'); => 'contact_104' ``` **escape**`_.escape(string)` Escapes a string for insertion into HTML, replacing &, <, >, ", `, and ' characters. ``` _.escape('Curly, Larry & Moe'); => "Curly, Larry &amp; Moe" ``` **unescape**`_.unescape(string)` The opposite of [**escape**](#escape), replaces &amp;, &lt;, &gt;, &quot;, &#96; and &#x27; with their unescaped counterparts. 
``` _.unescape('Curly, Larry &amp; Moe'); => "Curly, Larry & Moe" ``` **result**`_.result(object, property, [defaultValue])` If the value of the named **property** is a function then invoke it with the **object** as context; otherwise, return it. If a default value is provided and the property doesn't exist or is undefined then the default will be returned. If defaultValue is a function its result will be returned. ``` var object = {cheese: 'crumpets', stuff: function(){ return 'nonsense'; }}; _.result(object, 'cheese'); => "crumpets" _.result(object, 'stuff'); => "nonsense" _.result(object, 'meat', 'ham'); => "ham" ``` **now**`_.now()` Returns an integer timestamp for the current time, using the fastest method available in the runtime. Useful for implementing timing/animation functions. ``` _.now(); => 1392066795351 ``` **template**`_.template(templateString, [settings])` Compiles JavaScript templates into functions that can be evaluated for rendering. Useful for rendering complicated bits of HTML from JSON data sources. Template functions can both interpolate values, using <%= … %>, as well as execute arbitrary JavaScript code, with <% … %>. If you wish to interpolate a value, and have it be HTML-escaped, use <%- … %>. When you evaluate a template function, pass in a **data** object that has properties corresponding to the template's free variables. The **settings** argument should be a hash containing any \_.templateSettings that should be overridden. ``` var compiled = _.template("hello: <%= name %>"); compiled({name: 'moe'}); => "hello: moe" var template = _.template("<b><%- value %></b>"); template({value: '<script>'}); => "<b>&lt;script&gt;</b>" ``` You can also use print from within JavaScript code. This is sometimes more convenient than using <%= ... %>. 
``` var compiled = _.template("<% print('Hello ' + epithet); %>"); compiled({epithet: "stooge"}); => "Hello stooge" ``` If ERB-style delimiters aren't your cup of tea, you can change Underscore's template settings to use different symbols to set off interpolated code. Define an **interpolate** regex to match expressions that should be interpolated verbatim, an **escape** regex to match expressions that should be inserted after being HTML-escaped, and an **evaluate** regex to match expressions that should be evaluated without insertion into the resulting string. Note that if part of your template matches more than one of these regexes, the first will be applied by the following order of priority: (1) **escape**, (2) **interpolate**, (3) **evaluate**. You may define or omit any combination of the three. For example, to perform [Mustache.js](https://github.com/janl/mustache.js#readme)-style templating: ``` _.templateSettings = { interpolate: /\{\{(.+?)\}\}/g }; var template = _.template("Hello {{ name }}!"); template({name: "Mustache"}); => "Hello Mustache!" ``` By default, **template** places the values from your data in the local scope via the with statement. However, you can specify a single variable name with the **variable** setting. This can significantly improve the speed at which a template is able to render. ``` _.template("Using 'with': <%= data.answer %>", {variable: 'data'})({answer: 'no'}); => "Using 'with': no" ``` Precompiling your templates can be a big help when debugging errors you can't reproduce. This is because precompiled templates can provide line numbers and a stack trace, something that is not possible when compiling templates on the client. The **source** property is available on the compiled template function for easy precompilation. ``` <script> JST.project = <%= _.template(jstText).source %>; </script> ``` Object-Oriented Style --------------------- You can use Underscore in either an object-oriented or a functional style, depending on your preference. 
The following two lines of code are identical ways to double a list of numbers. ``` _.map([1, 2, 3], function(n){ return n * 2; }); _([1, 2, 3]).map(function(n){ return n * 2; }); ``` Chaining -------- Calling chain will cause all future method calls to return wrapped objects. When you've finished the computation, call value to retrieve the final value. Here's an example of chaining together a **map/flatten/reduce**, in order to get the word count of every word in a song. ``` var lyrics = [ {line: 1, words: "I'm a lumberjack and I'm okay"}, {line: 2, words: "I sleep all night and I work all day"}, {line: 3, words: "He's a lumberjack and he's okay"}, {line: 4, words: "He sleeps all night and he works all day"} ]; _.chain(lyrics) .map(function(line) { return line.words.split(' '); }) .flatten() .reduce(function(counts, word) { counts[word] = (counts[word] || 0) + 1; return counts; }, {}) .value(); => {lumberjack: 2, all: 4, night: 2 ... } ``` In addition, the [Array prototype's methods](https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Array/prototype) are proxied through the chained Underscore object, so you can slip a reverse or a push into your chain, and continue to modify the array. **chain**`_.chain(obj)` Returns a wrapped object. Calling methods on this object will continue to return wrapped objects until value is called. ``` var stooges = [{name: 'curly', age: 25}, {name: 'moe', age: 21}, {name: 'larry', age: 23}]; var youngest = _.chain(stooges) .sortBy(function(stooge){ return stooge.age; }) .map(function(stooge){ return stooge.name + ' is ' + stooge.age; }) .first() .value(); => "moe is 21" ``` **value**`_.chain(obj).value()` Extracts the value of a wrapped object. ``` _.chain([1, 2, 3]).reverse().value(); => [3, 2, 1] ```
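The wrapper mechanics that chain and value rely on can be sketched in a few lines of plain JavaScript. This is a simplified illustration of the idea only, not Underscore's actual implementation; the sketch supports just three methods:

```javascript
// Toy chaining wrapper: every method re-wraps its result so calls can be
// chained, and value() unwraps the final result. Illustrative only, not
// Underscore's real code.
function chain(obj) {
  return {
    map: function(fn) { return chain(obj.map(fn)); },
    filter: function(fn) { return chain(obj.filter(fn)); },
    // Copy before reversing so the original array is left untouched.
    reverse: function() { return chain(obj.slice().reverse()); },
    value: function() { return obj; }
  };
}

var squaresOfEvens = chain([1, 2, 3, 4])
  .filter(function(n) { return n % 2 === 0; })
  .map(function(n) { return n * n; })
  .value();
// squaresOfEvens is [4, 16]
```

Underscore's real wrapper proxies every Underscore function (plus the Array prototype methods) in the same re-wrap-and-return style, which is why any chain can keep growing until value is called.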
tcl_tk Tcl/Tk Documentation Tcl/Tk Documentation ==================== [Tcl/Tk Applications](usercmd/contents.htm) The interpreters which implement Tcl and Tk. [Tcl Commands](tclcmd/contents.htm "version 8.6.6") The commands which the **tclsh** interpreter implements. [Tk Commands](tkcmd/contents.htm "version 8.6.6") The additional commands which the **wish** interpreter implements. [[incr Tcl] Package Commands](itclcmd/contents.htm "version 4.0.5") The additional commands provided by the [incr Tcl] package. [SQLite3 Package Commands](sqlitecmd/contents.htm "version 3.13.0") The additional commands provided by the SQLite3 package. [TDBC Package Commands](tdbccmd/contents.htm "version 1.0.4") The additional commands provided by the TDBC package. [tdbc::mysql Package Commands](tdbcmysqlcmd/contents.htm "version 1.0.4") The additional commands provided by the tdbc::mysql package. [tdbc::odbc Package Commands](tdbcodbccmd/contents.htm "version 1.0.4") The additional commands provided by the tdbc::odbc package. [tdbc::postgres Package Commands](tdbcpostgrescmd/contents.htm "version 1.0.4") The additional commands provided by the tdbc::postgres package. [tdbc::sqlite3 Package Commands](tdbcsqlitecmd/contents.htm "version 1.0.4") The additional commands provided by the tdbc::sqlite3 package. [Thread Package Commands](threadcmd/contents.htm "version 2.8.0") The additional commands provided by the Thread package. [Tcl C API](https://www.tcl.tk/man/tcl/TclLib/contents.htm "version 8.6.6") The C functions which a Tcl extended C program may use. [Tk C API](https://www.tcl.tk/man/tcl/TkLib/contents.htm "version 8.6.6") The additional C functions which a Tk extended C program may use. [[incr Tcl] Package C API](https://www.tcl.tk/man/tcl/ItclLib/contents.htm "version 4.0.5") The additional C functions provided by the [incr Tcl] package. [TDBC Package C API](https://www.tcl.tk/man/tcl/TdbcLib/contents.htm "version 1.0.4") The additional C functions provided by the TDBC package. 
[Keywords](https://www.tcl.tk/man/tcl/Keywords/contents.htm) The keywords from the Tcl/Tk man pages. Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/contents.htm> tcl_tk SqliteCmd SqliteCmd ========= | | | --- | | [sqlite3](sqlite3.htm "An interface to the SQLite3 database engine") | tcl_tk sqlite3 sqlite3 ======= Name ---- sqlite3 — an interface to the SQLite3 database engine Synopsis -------- **sqlite3** *command\_name ?filename?* Description ----------- SQLite3 is a self-contained, zero-configuration, transactional SQL database engine. This extension provides an easy-to-use interface for accessing SQLite database files from Tcl. For full documentation see *<http://www.sqlite.org/>* and in particular *<http://www.sqlite.org/tclsqlite.html>*. tcl_tk ThreadCmd ThreadCmd ========= | | | | | | --- | --- | --- | --- | | [thread](thread.htm "Extension for script access to Tcl threading") | [tpool](tpool.htm "Part of the Tcl threading extension implementing pools of worker threads.") | [tsv](tsv.htm "Part of the Tcl threading extension allowing script-level manipulation of data shared between threads.") | [ttrace](ttrace.htm "Trace-based interpreter initialization") | tcl_tk tsv tsv === [NAME](tsv.htm#M2) tsv — Part of the Tcl threading extension allowing script-level manipulation of data shared between threads. [SYNOPSIS](tsv.htm#M3) [DESCRIPTION](tsv.htm#M4) [ELEMENT COMMANDS](tsv.htm#M5) [**tsv::names** ?pattern?](tsv.htm#M6) [**tsv::object** *varname* *element*](tsv.htm#M7) [**tsv::set** *varname* *element* ?value?](tsv.htm#M8) [**tsv::get** *varname* *element* ?namedvar?](tsv.htm#M9) [**tsv::unset** *varname* ?element?](tsv.htm#M10) [**tsv::exists** *varname* *element*](tsv.htm#M11) [**tsv::pop** *varname* *element*](tsv.htm#M12) [**tsv::move** *varname* *oldname* *newname*](tsv.htm#M13) [**tsv::incr** *varname* *element* ?count?](tsv.htm#M14) [**tsv::append** *varname* *element* *value* ?value ...?](tsv.htm#M15) [**tsv::lock** *varname* *arg* ?arg ...?](tsv.htm#M16) [**tsv::handlers**](tsv.htm#M17) [LIST COMMANDS](tsv.htm#M18) [**tsv::lappend** *varname* *element* *value* ?value ...?](tsv.htm#M19) [**tsv::linsert** *varname* *element* *index* *value* ?value ...?](tsv.htm#M20) [**tsv::lreplace** *varname* *element* *first* *last* ?value ...?](tsv.htm#M21) [**tsv::llength** *varname* *element*](tsv.htm#M22) [**tsv::lindex** *varname* *element* ?index?](tsv.htm#M23) [**tsv::lrange** *varname* *element* *from* *to*](tsv.htm#M24) [**tsv::lsearch** *varname* *element* ?options? *pattern*](tsv.htm#M25) [**tsv::lset** *varname* *element* *index* ?index ...? *value*](tsv.htm#M26) [**tsv::lpop** *varname* *element* ?index?](tsv.htm#M27) [**tsv::lpush** *varname* *element* ?index?](tsv.htm#M28) [ARRAY COMMANDS](tsv.htm#M29) [**tsv::array set** *varname* *list*](tsv.htm#M30) [**tsv::array get** *varname* ?pattern?](tsv.htm#M31) [**tsv::array names** *varname* ?pattern?](tsv.htm#M32) [**tsv::array size** *varname*](tsv.htm#M33) [**tsv::array reset** *varname* *list*](tsv.htm#M34) [**tsv::array bind** *varname* *handle*](tsv.htm#M35) [**tsv::array unbind** *varname*](tsv.htm#M36) [**tsv::array isbound** *varname*](tsv.htm#M37) [KEYED LIST COMMANDS](tsv.htm#M38) [**tsv::keyldel** *varname* *keylist* *key*](tsv.htm#M39) [**tsv::keylget** *varname* *keylist* *key* ?retvar?](tsv.htm#M40) [**tsv::keylkeys** *varname* *keylist* ?key?](tsv.htm#M41) [**tsv::keylset** *varname* *keylist* *key* *value* ?key value ...?](tsv.htm#M42) [DISCUSSION](tsv.htm#M43) [CREDITS](tsv.htm#M44) [SEE ALSO](tsv.htm#M45) [KEYWORDS](tsv.htm#M46) Name ---- tsv — Part of the Tcl threading extension allowing script-level manipulation of data shared between threads. Synopsis -------- package require **Tcl 8.4** package require **Thread ?2.8?** **tsv::names** ?pattern? **tsv::object** *varname* *element* **tsv::set** *varname* *element* ?value? **tsv::get** *varname* *element* ?namedvar? 
**tsv::unset** *varname* ?element? **tsv::exists** *varname* *element* **tsv::pop** *varname* *element* **tsv::move** *varname* *oldname* *newname* **tsv::incr** *varname* *element* ?count? **tsv::append** *varname* *element* *value* ?value ...? **tsv::lock** *varname* *arg* ?arg ...? **tsv::handlers** **tsv::lappend** *varname* *element* *value* ?value ...? **tsv::linsert** *varname* *element* *index* *value* ?value ...? **tsv::lreplace** *varname* *element* *first* *last* ?value ...? **tsv::llength** *varname* *element* **tsv::lindex** *varname* *element* ?index? **tsv::lrange** *varname* *element* *from* *to* **tsv::lsearch** *varname* *element* ?options? *pattern* **tsv::lset** *varname* *element* *index* ?index ...? *value* **tsv::lpop** *varname* *element* ?index? **tsv::lpush** *varname* *element* ?index? **tsv::array set** *varname* *list* **tsv::array get** *varname* ?pattern? **tsv::array names** *varname* ?pattern? **tsv::array size** *varname* **tsv::array reset** *varname* *list* **tsv::array bind** *varname* *handle* **tsv::array unbind** *varname* **tsv::array isbound** *varname* **tsv::keyldel** *varname* *keylist* *key* **tsv::keylget** *varname* *keylist* *key* ?retvar? **tsv::keylkeys** *varname* *keylist* ?key? **tsv::keylset** *varname* *keylist* *key* *value* ?key value ...? Description ----------- This section describes the commands implementing thread shared variables. A thread shared variable is very similar to a Tcl array, but in contrast to a Tcl array it is created in shared memory and can be accessed from many threads at the same time. An important feature of thread shared variables is that each access to the variable is internally protected by a mutex, so the script programmer does not have to take care of locking the variable himself. Thread shared variables are not bound to any thread explicitly. 
That means that when the thread which created a thread shared variable exits, the variable and its associated memory are not unset or reclaimed. The user has to unset the variable explicitly to reclaim the memory it consumes. Element commands ---------------- **tsv::names** ?pattern? Returns the names of shared variables matching the optional ?pattern?, or all known variables if the pattern is omitted. **tsv::object** *varname* *element* Creates an object accessor command for the *element* in the shared variable *varname*. Using this command, one can apply most of the other shared variable commands as method functions of the element object command. The object command is automatically deleted when the element it points to is unset. ``` % tsv::set foo bar "A shared string" % set string [tsv::object foo bar] % $string append " appended" => A shared string appended ``` **tsv::set** *varname* *element* ?value? Sets the value of the *element* in the shared variable *varname* to *value* and returns the value to the caller. The *value* may be omitted, in which case the command returns the current value of the element. If the element cannot be found, an error is triggered. **tsv::get** *varname* *element* ?namedvar? Retrieves the value of the *element* from the shared variable *varname*. If the optional argument *namedvar* is given, the value is stored in the named variable. The return value of the command depends on the presence of the optional argument *namedvar*. If the argument is omitted and the requested element cannot be found in the shared array, the command triggers an error. If, however, the optional argument is given on the command line, the command returns true (1) if the element is found or false (0) if it is not. **tsv::unset** *varname* ?element? Unsets the *element* from the shared variable *varname*. If the optional element is not given, it deletes the whole variable.
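Put together, the element commands above allow a simple shared key-value store; a minimal sketch (the variable and element names here are illustrative, not part of the API):

```
# Create/overwrite an element in the shared variable "config".
tsv::set config timeout 30

# Read it back; without ?namedvar? the value is the command result.
set t [tsv::get config timeout]

# With a named variable, tsv::get reports existence instead of erroring.
if {[tsv::get config retries r]} {
    puts "retries = $r"
} else {
    puts "retries not set"
}

# Remove the element, then the whole shared variable.
tsv::unset config timeout
tsv::unset config
```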
**tsv::exists** *varname* *element* Checks whether the *element* exists in the shared variable *varname* and returns true (1) if it does or false (0) if it does not. **tsv::pop** *varname* *element* Returns the value of the *element* in the shared variable *varname* and unsets the element, all in one atomic operation. **tsv::move** *varname* *oldname* *newname* Renames the element *oldname* to *newname* in the shared variable *varname*. This effectively performs a get/unset/set sequence of operations, but all in one atomic step. **tsv::incr** *varname* *element* ?count? Similar to the standard Tcl **[incr](../tclcmd/incr.htm)** command, but increments the value of the *element* in the shared variable *varname* instead of a Tcl variable. **tsv::append** *varname* *element* *value* ?value ...? Similar to the standard Tcl **[append](../tclcmd/append.htm)** command, but appends one or more values to the *element* in the shared variable *varname* instead of a Tcl variable. **tsv::lock** *varname* *arg* ?arg ...? This command concatenates the passed arguments and evaluates the resulting script under internal mutex protection. During the script evaluation, the entire shared variable is locked. For shared variable commands within the script, internal locking is disabled so no deadlock can occur. It is also allowed to unset the shared variable from within the script. The shared variable is automatically created if it did not exist at the time of the first lock operation. ``` % tsv::lock foo { tsv::lappend foo bar 1 tsv::lappend foo bar 2 puts stderr [tsv::set foo bar] tsv::unset foo } ``` **tsv::handlers** Returns the names of all persistent storage handlers enabled at compile time. See **[ARRAY COMMANDS](#M29)** for details. List commands ------------- These commands are similar to the equivalently named Tcl commands. The difference is that they operate on elements of shared arrays. **tsv::lappend** *varname* *element* *value* ?value ...?
Similar to the standard Tcl **[lappend](../tclcmd/lappend.htm)** command, but appends one or more values to the *element* in the shared variable *varname* instead of a Tcl variable. **tsv::linsert** *varname* *element* *index* *value* ?value ...? Similar to the standard Tcl **[linsert](../tclcmd/linsert.htm)** command, but inserts one or more values at the *index* list position in the *element* in the shared variable *varname* instead of a Tcl variable. **tsv::lreplace** *varname* *element* *first* *last* ?value ...? Similar to the standard Tcl **[lreplace](../tclcmd/lreplace.htm)** command, but replaces one or more values between the *first* and *last* positions in the *element* of the shared variable *varname* instead of a Tcl variable. **tsv::llength** *varname* *element* Similar to the standard Tcl **[llength](../tclcmd/llength.htm)** command, but returns the length of the *element* in the shared variable *varname* instead of a Tcl variable. **tsv::lindex** *varname* *element* ?index? Similar to the standard Tcl **[lindex](../tclcmd/lindex.htm)** command, but returns the value at the *index* list position of the *element* from the shared variable *varname* instead of a Tcl variable. **tsv::lrange** *varname* *element* *from* *to* Similar to the standard Tcl **[lrange](../tclcmd/lrange.htm)** command, but returns the values between the *from* and *to* list positions from the *element* in the shared variable *varname* instead of a Tcl variable. **tsv::lsearch** *varname* *element* ?options? *pattern* Similar to the standard Tcl **[lsearch](../tclcmd/lsearch.htm)** command, but searches the *element* in the shared variable *varname* instead of a Tcl variable. **tsv::lset** *varname* *element* *index* ?index ...? *value* Similar to the standard Tcl **[lset](../tclcmd/lset.htm)** command, but sets the *element* in the shared variable *varname* instead of a Tcl variable. **tsv::lpop** *varname* *element* ?index?
Similar to the standard Tcl **[lindex](../tclcmd/lindex.htm)** command, but in addition to returning, it also splices the value out of the *element* from the shared variable *varname* in one atomic operation. In contrast to the Tcl **[lindex](../tclcmd/lindex.htm)** command, this command returns no value to the caller. **tsv::lpush** *varname* *element* ?index? This command performs the opposite of the **tsv::lpop** command. Like its counterpart, it returns no value to the caller. Array commands -------------- These commands support most of the options of the standard Tcl **[array](../tclcmd/array.htm)** command. In addition, they allow binding a shared variable to a persistent storage database. The persistent storage options currently supported are GNU Gdbm and LMDB. These options have to be selected at package compile time. The implementation provides hooks for defining other persistency layers, if needed. **tsv::array set** *varname* *list* Does the same as the standard Tcl **[array set](../tclcmd/array.htm)**. **tsv::array get** *varname* ?pattern? Does the same as the standard Tcl **[array get](../tclcmd/array.htm)**. **tsv::array names** *varname* ?pattern? Does the same as the standard Tcl **[array names](../tclcmd/array.htm)**. **tsv::array size** *varname* Does the same as the standard Tcl **[array size](../tclcmd/array.htm)**. **tsv::array reset** *varname* *list* Does the same as the standard Tcl **[array set](../tclcmd/array.htm)**, but it clears *varname* and sets the new values from the list atomically. **tsv::array bind** *varname* *handle* Binds *varname* to the persistent storage *handle*. The format of the *handle* is <handler>:<address>, where <handler> is "gdbm" for GNU Gdbm or "lmdb" for LMDB, and <address> is the path to the database file. **tsv::array unbind** *varname* Unbinds the shared *array* from its bound persistent storage.
**tsv::array isbound** *varname* Returns true (1) if the shared *varname* is bound to some persistent storage, or false (0) if not. Keyed list commands ------------------- Keyed list commands are borrowed from the TclX package. Keyed lists provide a structured data type built upon standard Tcl lists. This is functionality similar to structs in the C programming language. A keyed list is a list in which each element contains a key and value pair. These element pairs are stored as lists themselves, where the key is the first element of the list, and the value is the second. The key-value pairs are referred to as fields. This is an example of a keyed list: ``` {{NAME {Frank Zappa}} {JOB {musician and composer}}} ``` Fields may contain subfields; `.' is the separator character. Subfields are actually fields where the value is another keyed list. Thus the following list has the top-level fields ID and NAME, and subfields NAME.FIRST and NAME.LAST: ``` {ID 106} {NAME {{FIRST Frank} {LAST Zappa}}} ``` There is no limit to the recursive depth of subfields, allowing one to build complex data structures. Keyed lists are constructed and accessed via a number of commands. All keyed list management commands take the name of the variable containing the keyed list as an argument (i.e. passed by reference), rather than passing the list directly. **tsv::keyldel** *varname* *keylist* *key* Delete the field specified by *key* from the keyed list *keylist* in the shared variable *varname*. This removes both the key and the value from the keyed list. **tsv::keylget** *varname* *keylist* *key* ?retvar? Return the value associated with *key* from the keyed list *keylist* in the shared variable *varname*. If the optional *retvar* is not specified, then the value will be returned as the result of the command. In this case, if the key is not found in the list, an error will result.
If *retvar* is specified and *key* is in the list, then the value is returned in the variable *retvar* and the command returns 1 to indicate the key was present in the list. If *key* is not in the list, the command will return 0, and *retvar* will be left unchanged. If {} is specified for *retvar*, the value is not returned, allowing the Tcl programmer to determine whether a *key* is present in a keyed list without setting a variable as a side effect. **tsv::keylkeys** *varname* *keylist* ?key? Return a list of the keys in the keyed list *keylist* in the shared variable *varname*. If *key* is specified, then it is the name of a key field whose subfield keys are to be retrieved. **tsv::keylset** *varname* *keylist* *key* *value* ?key value ...? Set the value associated with *key* in the keyed list *keylist* to *value*. If *keylist* does not exist, it is created. If *key* is not currently in the list, it will be added. If it already exists, *value* replaces the existing value. Multiple keys and values may be specified, if desired. Discussion ---------- The current implementation of thread shared variables allows for easy and convenient access to data shared between different threads. Internally, the data is stored in Tcl objects, and all package commands operate on the internal data representation, thus minimizing shimmering and improving performance. Special care has been taken to ensure that all object data is properly locked and deep-copied when moving objects between threads. Due to the internal design of the Tcl core, there is unfortunately no provision for full integration of shared variables into the Tcl syntax. All access to shared data must be performed with the supplied package commands. Also, variable traces are not supported. But even so, the benefits of easy, simple and safe shared data manipulation outweigh the imposed limitations.
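The keyed list commands described above allow struct-like records to be kept in shared memory; a minimal sketch (the variable, keyed list and field names here are illustrative):

```
# Build a keyed list "person" inside the shared variable "db".
tsv::keylset db person NAME {Frank Zappa} JOB {musician and composer}

# Subfields use the dot separator.
tsv::keylset db person ADDRESS.CITY {Los Angeles}

# Read a field back; with ?retvar? the command reports existence.
if {[tsv::keylget db person JOB job]} {
    puts "JOB = $job"
}

# List the top-level keys, then drop a field.
puts [tsv::keylkeys db person]
tsv::keyldel db person JOB
```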
Credits ------- Thread shared variables are inspired by the nsv interface found in AOLserver, a highly scalable Web server from America Online. See also -------- **[thread](thread.htm)**, **[tpool](tpool.htm)**, **[ttrace](ttrace.htm)**
tcl_tk thread thread ====== [NAME](thread.htm#M2) thread — Extension for script access to Tcl threading [SYNOPSIS](thread.htm#M3) [DESCRIPTION](thread.htm#M4) [COMMANDS](thread.htm#M5) [**thread::create** ?-joinable? ?-preserved? ?script?](thread.htm#M6) [**thread::preserve** ?id?](thread.htm#M8) [**thread::release** ?-wait? ?id?](thread.htm#M9) [**thread::id**](thread.htm#M10) [**thread::errorproc** ?procname?](thread.htm#M11) [**thread::cancel** ?-unwind? *id* ?result?](thread.htm#M12) [**thread::unwind**](thread.htm#M13) [**thread::exit** ?status?](thread.htm#M14) [**thread::names**](thread.htm#M15) [**thread::exists** *id*](thread.htm#M16) [**thread::send** ?-async? ?-head? *id* *script* ?varname?](thread.htm#M17) [**thread::broadcast** *script*](thread.htm#M19) [**thread::wait**](thread.htm#M20) [**thread::eval** ?-lock mutex? *arg* ?arg ...?](thread.htm#M21) [**thread::join** *id*](thread.htm#M22) [**thread::configure** *id* ?option? ?value? ?...?](thread.htm#M23) [**thread::transfer** *id* *channel*](thread.htm#M24) [**thread::detach** *channel*](thread.htm#M25) [**thread::attach** *channel*](thread.htm#M26) [**thread::mutex**](thread.htm#M27) [**thread::mutex** **create** ?-recursive?](thread.htm#M28) [**thread::mutex** **destroy** *mutex*](thread.htm#M29) [**thread::mutex** **lock** *mutex*](thread.htm#M30) [**thread::mutex** **unlock** *mutex*](thread.htm#M31) [**thread::rwmutex**](thread.htm#M32) [**thread::rwmutex** **create**](thread.htm#M33) [**thread::rwmutex** **destroy** *mutex*](thread.htm#M34) [**thread::rwmutex** **rlock** *mutex*](thread.htm#M35) [**thread::rwmutex** **wlock** *mutex*](thread.htm#M36) [**thread::rwmutex** **unlock** *mutex*](thread.htm#M37) [**thread::cond**](thread.htm#M38) [**thread::cond** **create**](thread.htm#M39) [**thread::cond** **destroy** *cond*](thread.htm#M40) [**thread::cond** **notify** *cond*](thread.htm#M41) [**thread::cond** **wait** *cond* *mutex* ?ms?](thread.htm#M42) [DISCUSSION](thread.htm#M44) [SEE
ALSO](thread.htm#M45) [KEYWORDS](thread.htm#M46) Name ---- thread — Extension for script access to Tcl threading Synopsis -------- package require **Tcl 8.4** package require **Thread ?2.8?** **thread::create** ?-joinable? ?-preserved? ?script? **thread::preserve** ?id? **thread::release** ?-wait? ?id? **thread::id** **thread::errorproc** ?procname? **thread::cancel** ?-unwind? *id* ?result? **thread::unwind** **thread::exit** ?status? **thread::names** **thread::exists** *id* **thread::send** ?-async? ?-head? *id* *script* ?varname? **thread::broadcast** *script* **thread::wait** **thread::eval** ?-lock mutex? *arg* ?arg ...? **thread::join** *id* **thread::configure** *id* ?option? ?value? ?...? **thread::transfer** *id* *channel* **thread::detach** *channel* **thread::attach** *channel* **thread::mutex** **thread::mutex** **create** ?-recursive? **thread::mutex** **[destroy](../tkcmd/destroy.htm)** *mutex* **thread::mutex** **lock** *mutex* **thread::mutex** **unlock** *mutex* **thread::rwmutex** **thread::rwmutex** **create** **thread::rwmutex** **[destroy](../tkcmd/destroy.htm)** *mutex* **thread::rwmutex** **rlock** *mutex* **thread::rwmutex** **wlock** *mutex* **thread::rwmutex** **unlock** *mutex* **thread::cond** **thread::cond** **create** **thread::cond** **[destroy](../tkcmd/destroy.htm)** *cond* **thread::cond** **notify** *cond* **thread::cond** **wait** *cond* *mutex* ?ms? Description ----------- The **thread** extension creates threads that contain Tcl interpreters, and it lets you send scripts to those threads for evaluation. Additionally, it provides script-level access to basic thread synchronization primitives, such as mutexes and condition variables. Commands -------- This section describes commands for creating and destroying threads and sending scripts to threads for evaluation. **thread::create** ?-joinable? ?-preserved? ?script? This command creates a thread that contains a Tcl interpreter.
The Tcl interpreter either evaluates the optional **script**, if specified, or it waits in the event loop for scripts that arrive via the **thread::send** command. The result, if any, of the optional **script** is never returned to the caller. The result of **thread::create** is the ID of the thread. This is the opaque handle which identifies the newly created thread for all other package commands. The handle of the thread goes out of scope automatically when the thread is marked for exit (see the **thread::release** command below). If the optional **script** argument contains the **thread::wait** command, the thread will enter the event loop. If no such command is found in the **script**, the thread will run the **script** to the end and exit. In that case, the handle may be safely ignored, since it refers to a thread which no longer exists by the time the command returns. Using the **-joinable** flag it is possible to create a joinable thread, i.e. one whose exit can be waited upon with the **thread::join** command. Note that failure to join a thread created with the **-joinable** flag results in resource and memory leaks. Threads created by **thread::create** cannot be destroyed forcefully. Consequently, there is no corresponding thread destroy command. A thread may only be released using **thread::release**, and if its internal reference count drops to zero, the thread is marked for exit. This kicks the thread out of event loop servicing, and the thread continues to execute the commands passed in the **script** argument, following the **thread::wait** command. If this was the last command in the script, as is usually the case, the thread will exit. It is possible to create a situation in which it may be impossible to terminate the thread, for example by putting an endless loop after the **thread::wait** or by entering the event loop again with a vwait-type command. In such cases, the thread may never exit.
This is considered bad practice and should be avoided if possible. This is best illustrated by the example below: ``` # You should never do ... set tid [thread::create { package require Http thread::wait vwait forever ; # <-- this! }] ``` The thread created in the above example will never be able to exit. After it has been released with the last matching **thread::release** call, the thread will jump out of the **thread::wait** and continue to execute the commands following it. It will enter the **[vwait](../tclcmd/vwait.htm)** command and wait endlessly for events. There is no way to terminate such a thread, so you wouldn't want to do this! Each newly created thread has its internal reference counter set to 0 (zero), i.e. it is unreserved. This counter gets incremented by a call to the **thread::preserve** command and decremented by a call to the **thread::release** command. These two commands implement a simple but effective thread reservation system and offer predictable and controllable thread termination capabilities. It is, however, possible to create initially preserved threads by using the **-preserved** flag of the **thread::create** command. Threads created with this flag have an initial reference counter value of 1 (one), and are thus initially marked reserved. **thread::preserve** ?id? This command increments the thread reference counter. Each call to this command increments the reference counter by one (1). The command returns the value of the reference counter after the increment. If called with the optional thread **id**, the command preserves the given thread. Otherwise the current thread is preserved. With reference counting, one can implement controlled access to a shared Tcl thread. By incrementing the reference counter, the caller signals that it wishes to use the thread for a longer period of time. By decrementing the counter, the caller signals that it has finished using the thread. **thread::release** ?-wait? ?id?
This command decrements the thread reference counter. Each call to this command decrements the reference counter by one (1). If called with the optional thread **id**, the command releases the given thread. Otherwise, the current thread is released. The command returns the value of the reference counter after the decrement. When the reference counter reaches zero (0), the target thread is marked for termination. You should not reference the thread after the **thread::release** command returns zero or a negative integer. The handle of the thread goes out of scope and should not be used any more. Any subsequent reference to the same thread handle will result in a Tcl error. The optional flag **-wait** instructs the caller thread to wait for the target thread to exit, if the effect of the command would result in termination of the target thread, i.e. if the return result would be zero (0). Without the flag, the caller thread does not wait for the target thread to exit. Care must be taken when using **-wait**, since it may block the caller thread indefinitely. This option has been implemented for some special uses of the extension and is deprecated for regular use. Regular users should create joinable threads with the **-joinable** option of the **thread::create** command and use **thread::join** to wait for the thread to exit. **thread::id** This command returns the ID of the current thread. **thread::errorproc** ?procname? This command sets a handler for errors that occur in scripts sent asynchronously, using the **-async** flag of the **thread::send** command, to other threads. If no handler is specified, the current handler is returned. The empty string resets the handler to its default (unspecified) value. An uncaught error in a thread causes an error message to be sent to the standard error channel. This default reporting scheme can be changed by registering a procedure which is called to report the error.
The *procname* is called in the interpreter that invoked the **thread::errorproc** command. The *procname* is called like this: ``` myerrorproc thread_id errorInfo ``` **thread::cancel** ?-unwind? *id* ?result? This command requires Tcl version 8.6 or higher. Cancels the script being evaluated in the thread given by the *id* parameter. Without the **-unwind** switch, the evaluation stack for the interpreter is unwound until an enclosing catch command is found or there are no further invocations of the interpreter left on the call stack. With the **-unwind** switch, the evaluation stack for the interpreter is unwound without regard to any intervening catch command until there are no further invocations of the interpreter left on the call stack. If *result* is present, it will be used as the error message string; otherwise, a default error message string will be used. **thread::unwind** Use of this command is deprecated in favour of the more advanced thread reservation system implemented with the **thread::preserve** and **thread::release** commands. Support for the **thread::unwind** command will disappear in some future major release of the extension. This command stops a prior **thread::wait** command. Execution of the script passed to the newly created thread will continue from the **thread::wait** command. If **thread::wait** was the last command in the script, the thread will exit. The command returns an empty result but may trigger a Tcl error with the message "target thread died" in some situations. **thread::exit** ?status? Use of this command is deprecated in favour of the more advanced thread reservation system implemented with the **thread::preserve** and **thread::release** commands. Support for the **thread::exit** command will disappear in some future major release of the extension. This command forces a thread stuck in the **thread::wait** command to unconditionally exit. The thread's exit status defaults to 666 and can be specified using the optional *status* argument.
The execution of the **thread::exit** command is guaranteed to leave the program memory in an inconsistent state, produce memory leaks and otherwise affect other subsystem(s) of the Tcl application in an unpredictable manner. The command returns an empty result but may trigger a Tcl error with the message "target thread died" in some situations. **thread::names** This command returns a list of thread IDs. These are only the threads that have been created via the **thread::create** command. If your application creates other threads at the C level, they are not reported by this command. **thread::exists** *id* Returns true (1) if the thread given by the *id* parameter exists, false (0) otherwise. This applies only to threads that have been created via the **thread::create** command. **thread::send** ?-async? ?-head? *id* *script* ?varname? This command passes a *script* to another thread and, optionally, waits for the result. If the **-async** flag is specified, the command does not wait for the result and returns an empty string. The target thread must enter its event loop in order to receive scripts sent via this command. This is done by default for threads created without a startup script. Threads can enter the event loop explicitly by calling **thread::wait** or any other relevant Tcl/Tk command, like **[update](../tclcmd/update.htm)**, **[vwait](../tclcmd/vwait.htm)**, etc. The optional **varname** specifies the name of the variable in which to store the result of the *script*. Without the **-async** flag, the command returns the evaluation code, similarly to the standard Tcl **[catch](../tclcmd/catch.htm)** command. If, however, the **-async** flag is specified, the command returns immediately and the caller can later **[vwait](../tclcmd/vwait.htm)** on ?varname?
to get the result of the passed *script*: ``` set t1 [thread::create] set t2 [thread::create] thread::send -async $t1 "set a 1" result thread::send -async $t2 "set b 2" result for {set i 0} {$i < 2} {incr i} { vwait result } ``` In the above example, two threads were fed work and both of them were instructed to signal the same variable "result" in the calling thread. The caller entered the event loop twice to get both results. Note, however, that the order of the received results may vary, depending on the current system load, the type of work done, etc. Many threads can simultaneously send scripts to the target thread for execution. All of them are entered into the event queue of the target thread and executed on a FIFO basis, intermingled with any other events pending in the event queue of the target thread. Using the optional ?-head? switch, scripts posted to the thread's event queue can be placed at the head, instead of at the tail of the queue, thus being executed in LIFO fashion. **thread::broadcast** *script* This command passes a *script* to all threads created by the package for execution. It does not wait for a response from any of the threads. **thread::wait** This enters the event loop so a thread can receive messages from the **thread::send** command. This command should only be used within the script passed to **thread::create**. It should be the very last command in the script. If this is not the case, the exiting thread will continue executing the script lines past the **thread::wait**, which is usually not what you want and/or expect. ``` set t1 [thread::create { # # Do some initialization work here # thread::wait ; # Enter the event loop }] ``` **thread::eval** ?-lock mutex? *arg* ?arg ...? This command concatenates the passed arguments and evaluates the resulting script under mutex protection. If no mutex is specified by using the ?-lock mutex? optional argument, the internal static mutex is used.
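The reservation and messaging commands above combine naturally; a minimal sketch of a synchronous round trip to a worker thread (the variable names are illustrative):

```
package require Thread

# Create a worker that sits in its event loop.
set tid [thread::create]

# Synchronous send: the return code mimics catch,
# and the result lands in the named variable.
if {[thread::send $tid {expr {6 * 7}} answer] == 0} {
    puts "worker says $answer"
}

# Drop our reference; the worker exits when its counter reaches 0.
thread::release $tid
```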
**thread::join** *id* This command waits for the thread with ID *id* to exit and then returns its exit code. Errors will be returned for threads which are not joinable or are already being waited upon by another thread. Upon the join, the handle of the thread goes out of scope and should not be used any more. **thread::configure** *id* ?option? ?value? ?...? This command configures various low-level aspects of the thread with ID *id* in a similar way as the standard Tcl command **[fconfigure](../tclcmd/fconfigure.htm)** configures some Tcl channel options. The options currently supported are: **-eventmark** and **-unwindonerror**. The **-eventmark** option, when set, limits the number of asynchronously posted scripts in the thread event loop. The **thread::send -async** command will block while the number of pending scripts in the event loop exceeds the value configured with **-eventmark**. The default value for **-eventmark** is 0 (zero), which effectively disables the checking, i.e. allows an unlimited number of posted scripts. The **-unwindonerror** option, when set, causes the target thread to unwind if the processing of a script results in an error. The default value for **-unwindonerror** is 0 (false), i.e. the thread continues to process scripts after one of the posted scripts fails. **thread::transfer** *id* *channel* This moves the specified *channel* from the current thread and interpreter to the main interpreter of the thread with the given *id*. After the move, the current interpreter has no access to the channel any more, but the main interpreter of the target thread will be able to use it from then on. The command waits until the other thread has incorporated the channel. Because of this, it is possible to deadlock the participating threads by commanding the other, through a synchronous **thread::send**, to transfer a channel to us. This easily extends into longer loops of threads waiting for each other.
Other restrictions: the channel in question must not be shared among multiple interpreters running in the sending thread. This automatically excludes the special channels for standard input, output and error. Due to the internal Tcl core implementation and the restriction on transferring shared channels, one has to take extra measures when transferring socket channels created by accepting a connection in the **[socket](../tclcmd/socket.htm)** command's callback procedures: ``` socket -server _Accept 2200 proc _Accept {s ipaddr port} { after idle [list Accept $s $ipaddr $port] } proc Accept {s ipaddr port} { set tid [thread::create] thread::transfer $tid $s } ``` **thread::detach** *channel* This detaches the specified *channel* from the current thread and interpreter. After that, the current interpreter has no access to the channel any more. The channel is in a parked state until some other (or the same) thread attaches the channel again with **thread::attach**. Restrictions: the same as for transferring shared channels with the **thread::transfer** command. **thread::attach** *channel* This attaches the previously detached *channel* to the current thread/interpreter. For already existing channels, the command does nothing, i.e. it is not an error to attach the same channel more than once. The first operation will actually perform the attach, while all subsequent operations will do nothing. The command throws an error if the *channel* cannot be found in the list of detached channels and/or in the current interpreter. **thread::mutex** Mutexes are the most common thread synchronization primitives. They are used to synchronize access from two or more threads to one or more shared resources. This command provides script-level access to exclusive and/or recursive mutexes. Exclusive mutexes can be locked only once by one thread, while recursive mutexes can be locked many times by the same thread.
For recursive mutexes, the number of lock and unlock operations must match; otherwise, the mutex will never be released, which would lead to various deadlock situations. Care has to be taken when using mutexes in a multithreaded program. Improper use of mutexes may lead to various deadlock situations, especially when using exclusive mutexes. The **thread::mutex** command supports the following subcommands and options: **thread::mutex** **create** ?-recursive? Creates the mutex and returns its opaque handle. This handle should be used for any future reference to the newly created mutex. If no optional ?-recursive? argument is specified, the command creates an exclusive mutex. With the ?-recursive? argument, the command creates a recursive mutex. **thread::mutex** **destroy** *mutex* Destroys the *mutex*. The mutex should be in the unlocked state before the destroy attempt. If the mutex is locked, the command will throw a Tcl error. **thread::mutex** **lock** *mutex* Locks the *mutex*. Locking an exclusive mutex may throw a Tcl error on an attempt to lock the same mutex twice from the same thread. If your program logic forces you to lock the same mutex twice or more from the same thread (this may happen in recursive procedure invocations), you should consider using recursive mutexes. **thread::mutex** **unlock** *mutex* Unlocks the *mutex* so some other thread may lock it again. An attempt to unlock an already unlocked mutex will throw a Tcl error. **thread::rwmutex** This command creates many-readers/single-writer mutexes. Reader/writer mutexes allow you to serialize access to a shared resource more optimally. In situations where a shared resource is mostly read and seldom modified, you might gain some performance by using reader/writer mutexes instead of exclusive or recursive mutexes. For reading the resource, a thread should obtain a read lock on the resource.
A read lock is non-exclusive, meaning that more than one thread can obtain a read lock on the same resource without waiting on other readers. To change the resource, however, a thread must obtain an exclusive write lock. This lock effectively blocks all threads from gaining the read lock while the resource is being modified by the writer thread. Only after the write lock has been released may the resource be read-locked again. The **thread::rwmutex** command supports the following subcommands and options:

**thread::rwmutex** **create**
Creates a reader/writer mutex and returns its opaque handle. This handle should be used for any future reference to the newly created mutex.

**thread::rwmutex** **destroy** *mutex*
Destroys the reader/writer *mutex*. If the mutex is locked, an attempt to destroy it throws a Tcl error.

**thread::rwmutex** **rlock** *mutex*
Locks the *mutex* for reading. More than one thread may read-lock the same *mutex* at the same time.

**thread::rwmutex** **wlock** *mutex*
Locks the *mutex* for writing. Only one thread may write-lock the same *mutex* at the same time. An attempt to write-lock the same *mutex* twice from the same thread throws a Tcl error.

**thread::rwmutex** **unlock** *mutex*
Unlocks the *mutex* so some other thread may lock it again. An attempt to unlock an already unlocked *mutex* throws a Tcl error.

**thread::cond**
This command provides script-level access to condition variables. A condition variable creates a safe environment for the program to test some condition, sleep on it when false, and be awakened when it might have become true. A condition variable is always used in conjunction with an exclusive mutex. If you attempt to use another type of mutex in conjunction with a condition variable, a Tcl error will be thrown. The command supports the following subcommands and options:

**thread::cond** **create**
Creates a condition variable and returns its opaque handle.
This handle should be used for any future reference to the newly created condition variable.

**thread::cond** **destroy** *cond*
Destroys the condition variable *cond*. Extreme care has to be taken that nobody is using (i.e. waiting on) the condition variable, otherwise unexpected errors may happen.

**thread::cond** **notify** *cond*
Wakes up all threads waiting on the condition variable *cond*.

**thread::cond** **wait** *cond* *mutex* ?ms?
This command is used to suspend program execution until the condition variable *cond* has been signalled or the optional timer has expired. The exclusive *mutex* must be locked by the calling thread on entrance to this command; if the mutex is not locked, a Tcl error is thrown. While waiting on the *cond*, the command releases the *mutex*. Before returning to the calling thread, the command re-acquires the *mutex*. Unlocking the *mutex* and waiting on the condition variable *cond* is done atomically. The **ms** command option, if given, must be an integer specifying the time interval in milliseconds the command waits to be signalled. Otherwise the command waits on the condition notify forever.

In multithreaded programs there are many situations where a thread has to wait for some event to happen before it is allowed to proceed. This is usually accomplished by repeatedly testing a condition under mutex protection and waiting on the condition variable until the condition evaluates to true:

```
set mutex [thread::mutex create]
set cond  [thread::cond  create]

thread::mutex lock $mutex
while {<some_condition_is_true>} {
    thread::cond wait $cond $mutex
}
# Do some work under mutex protection
thread::mutex unlock $mutex
```

Repeated testing of the condition is needed since the condition variable may get signalled without the condition actually having changed (spurious thread wake-ups, for example).
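The reader/writer mutexes described earlier follow the same create/lock/unlock life cycle as exclusive mutexes. A minimal sketch, assuming the Thread package is loaded and the handle `$rw` is shared with the other threads that access the same resource:

```tcl
package require Thread

# Create a reader/writer mutex; its handle would be passed to worker threads.
set rw [thread::rwmutex create]

# Reader side: many threads may hold the read lock at the same time.
thread::rwmutex rlock $rw
# ... read the shared resource ...
thread::rwmutex unlock $rw

# Writer side: exclusive; blocks new read locks until released.
thread::rwmutex wlock $rw
# ... modify the shared resource ...
thread::rwmutex unlock $rw

# Destroy only once the mutex is unlocked everywhere.
thread::rwmutex destroy $rw
```

The sketch runs in a single thread only to show the call sequence; in practice the rlock/unlock and wlock/unlock pairs execute in different threads sharing the same handle.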
Discussion
----------

The fundamental threading model in Tcl is that there can be one or more Tcl interpreters per thread, but each Tcl interpreter should only be used by the single thread which created it. A "shared memory" abstraction is awkward to provide in Tcl because Tcl makes assumptions about variable and data ownership. Therefore this extension supports a simple form of threading where the main thread can manage several background, or "worker", threads. For example, an event-driven server can pass requests to worker threads, and then await responses from worker threads or new client requests. Everything goes through the common Tcl event loop, so message passing between threads works naturally with event-driven I/O, **[vwait](../tclcmd/vwait.htm)** on variables, and so forth. For the transfer of bulk information it is possible to move channels between the threads. For advanced multithreading scripts, script-level access to two basic synchronization primitives, mutexes and condition variables, is also supported.

See also
--------

***<http://www.tcl.tk/doc/howto/thread_model.html>***, **[tpool](tpool.htm)**, **[tsv](tsv.htm)**, **[ttrace](ttrace.htm)**
tcl_tk tpool

tpool
=====

[NAME](tpool.htm#M2) tpool — Part of the Tcl threading extension implementing pools of worker threads. [SYNOPSIS](tpool.htm#M3) [DESCRIPTION](tpool.htm#M4) [COMMANDS](tpool.htm#M5) [**tpool::create** ?options?](tpool.htm#M6) [**-minworkers** *number*](tpool.htm#M7) [**-maxworkers** *number*](tpool.htm#M8) [**-idletime** *seconds*](tpool.htm#M9) [**-initcmd** *script*](tpool.htm#M10) [**-exitcmd** *script*](tpool.htm#M11) [**tpool::names**](tpool.htm#M12) [**tpool::post** ?-detached? ?-nowait? *tpool* *script*](tpool.htm#M13) [**tpool::wait** *tpool* *joblist* ?varname?](tpool.htm#M14) [**tpool::cancel** *tpool* *joblist* ?varname?](tpool.htm#M15) [**tpool::get** *tpool* *job*](tpool.htm#M16) [**tpool::preserve** *tpool*](tpool.htm#M17) [**tpool::release** *tpool*](tpool.htm#M18) [**tpool::suspend** *tpool*](tpool.htm#M19) [**tpool::resume** *tpool*](tpool.htm#M20) [DISCUSSION](tpool.htm#M21) [SEE ALSO](tpool.htm#M22) [KEYWORDS](tpool.htm#M23)

Name
----

tpool — Part of the Tcl threading extension implementing pools of worker threads.

Synopsis
--------

package require **Tcl 8.4**
package require **Thread ?2.8?**

**tpool::create** ?options?
**tpool::names**
**tpool::post** ?-detached? ?-nowait? *tpool* *script*
**tpool::wait** *tpool* *joblist* ?varname?
**tpool::cancel** *tpool* *joblist* ?varname?
**tpool::get** *tpool* *job*
**tpool::preserve** *tpool*
**tpool::release** *tpool*
**tpool::suspend** *tpool*
**tpool::resume** *tpool*

Description
-----------

This package creates and manages pools of worker threads. It allows you to post jobs to worker threads and wait for their completion. The threadpool implementation is Tcl event-loop aware: any time a caller is forced to wait for an event (a job being completed, or a worker thread becoming idle or initialized), the implementation enters the event loop and allows servicing of other pending file, timer, or any other supported events.
Commands
--------

**tpool::create** ?options?
This command creates a new threadpool. It accepts several options as key-value pairs, which are used to tune some threadpool parameters. The command returns the ID of the newly created threadpool. The following options are supported:

**-minworkers** *number*
Minimum number of worker threads needed for this threadpool instance. During threadpool creation, the implementation will create that many worker threads upfront and will keep at least that number of them alive during the lifetime of the threadpool instance. The default value of this parameter is 0 (zero), which means that a new threadpool will initially have no worker threads. All worker threads will be started on demand by callers running the **tpool::post** command and posting jobs to the job queue.

**-maxworkers** *number*
Maximum number of worker threads allowed for this threadpool instance. If a new job is pending and there are no idle worker threads available, the implementation will try to create a new worker thread. If the number of available worker threads is lower than the given number, a new worker thread will start. The caller will automatically enter the event loop and wait until the worker thread has initialized. If, however, the number of available worker threads is equal to the given number, the caller will enter the event loop and wait for the first worker thread to become idle, thus ready to run the job. The default value of this parameter is 4 (four), which means that the threadpool instance will allow a maximum of 4 worker threads running jobs or idling while waiting for new jobs to get posted to the job queue.

**-idletime** *seconds*
Time in seconds an idle worker thread waits for a job to get posted to the job queue. If no job arrives during this interval and the time expires, the worker thread will check the number of currently available worker threads and, if the number is higher than the number set by the **-minworkers** option, it will exit.
If an **-exitcmd** script has been defined, the exiting worker thread will first run the script and then exit. Errors from the exit script, if any, are ignored. An idle worker thread does not service the event loop. If you do, however, put the worker thread into the event loop, by evaluating **[vwait](../tclcmd/vwait.htm)** or other related Tcl commands, the worker thread will not be in the idle state, hence the idle timer will not be taken into account. The default value for this option is unspecified.

**-initcmd** *script*
Sets a Tcl script used to initialize a new worker thread. This is usually used to load packages and commands in the worker, set default variables, create namespaces, and such. If the passed script runs into a Tcl error, the worker will not be created and the initiating command (either **tpool::create** or **tpool::post**) will throw an error. The default value for this option is unspecified, hence the Tcl interpreter of the worker thread will contain just the initial set of Tcl commands.

**-exitcmd** *script*
Sets a Tcl script run when the idle worker thread exits. This is normally used to clean up the state of the worker thread, release reserved resources, clean up memory and such. The default value for this option is unspecified, thus no Tcl script will run on worker thread exit.

**tpool::names**
This command returns a list of IDs of threadpools created with the **tpool::create** command. If no threadpools were found, the command returns an empty list.

**tpool::post** ?-detached? ?-nowait? *tpool* *script*
This command sends a *script* to the target *tpool* threadpool for execution. The script will be executed in the first available idle worker thread. If there are no idle worker threads available, the command will create a new one, enter the event loop and service events until the newly created thread is initialized.
If the current number of worker threads is equal to the maximum number of worker threads, as defined during threadpool creation, the command will enter the event loop and service events while waiting for one of the worker threads to become idle. If the optional ?-nowait? argument is given, the command will not wait for an idle worker; it will just place the job in the pool's job queue and return immediately. The command returns the ID of the posted job. This ID is used in subsequent **tpool::wait**, **tpool::get** and **tpool::cancel** commands to wait for and retrieve the result of the posted script, or to cancel the posted job, respectively. If the optional ?-detached? argument is specified, the command will post a detached job. A detached job can not be cancelled or waited upon and is not identified by a job ID. If the threadpool *tpool* is not found in the list of active thread pools, the command throws an error. The error is also triggered if the newly created worker thread fails to initialize.

**tpool::wait** *tpool* *joblist* ?varname?
This command waits for one or many jobs, whose job IDs are given in the *joblist*, to get processed by the worker thread(s). If none of the specified jobs are ready, the command will enter the event loop, service events and wait for the first job to become ready. The command returns the list of completed job IDs. If the optional variable ?varname? is given, it will be set to the list of jobs in the *joblist* which are still pending. If the threadpool *tpool* is not found in the list of active thread pools, the command throws an error.

**tpool::cancel** *tpool* *joblist* ?varname?
This command cancels the previously posted jobs given by the *joblist* in the pool *tpool*. Job cancellation succeeds only for jobs still waiting to be processed. If a job is already being executed by one of the worker threads, it will not be cancelled. The command returns the list of cancelled job IDs.
If the optional variable ?varname? is given, it will be set to the list of jobs in the *joblist* which were not cancelled. If the threadpool *tpool* is not found in the list of active thread pools, the command throws an error.

**tpool::get** *tpool* *job*
This command retrieves the result of the previously posted *job*. Only results of jobs waited upon with the **tpool::wait** command can be retrieved. If the execution of the script resulted in an error, the command will throw the error and update the **[errorInfo](../tclcmd/tclvars.htm)** and **[errorCode](../tclcmd/tclvars.htm)** variables correspondingly. If the pool *tpool* is not found in the list of threadpools, the command throws an error. If the job *job* is not ready for retrieval, because it is currently being executed by a worker thread, the command throws an error.

**tpool::preserve** *tpool*
Each call to this command increments the reference counter of the threadpool *tpool* by one (1). The command returns the value of the reference counter after the increment. By incrementing the reference counter, the caller signals that he/she wishes to use the resource for a longer period of time.

**tpool::release** *tpool*
Each call to this command decrements the reference counter of the threadpool *tpool* by one (1). The command returns the value of the reference counter after the decrement. When the reference counter reaches zero (0), the threadpool *tpool* is marked for termination. You should not reference the threadpool after the **tpool::release** command returns zero. The *tpool* handle goes out of scope and should not be used any more. Any subsequent reference to the same threadpool handle will result in a Tcl error.

**tpool::suspend** *tpool*
Suspends processing of work on this queue. All pool workers are paused, but additional work can still be added to the pool. Note that adding additional work will not increase the number of workers dynamically while pool processing is suspended.
The number of workers is maintained at the count that was found prior to suspending worker activity. If you need to assure a certain number of worker threads, use the **-minworkers** option of the **tpool::create** command.

**tpool::resume** *tpool*
Resumes processing of work on this queue. All paused (suspended) workers are free to get work from the pool. Note that resuming pool operation will just let the already created workers proceed; it will not create additional worker threads to handle the work posted to the pool's work queue.

Discussion
----------

The threadpool is one of the most common threading paradigms for server applications handling a large number of relatively small tasks. A very simplistic model for building a server application would be to create a new thread each time a request arrives and service the request in the new thread. One of the disadvantages of this approach is that the overhead of creating a new thread for each request is significant; a server that created a new thread for each request would spend more time and consume more system resources creating and destroying threads than processing actual user requests. In addition to the overhead of creating and destroying threads, active threads consume system resources. Creating too many threads can cause the system to run out of memory or thrash due to excessive memory consumption.

A thread pool offers a solution to both the problem of thread life-cycle overhead and the problem of resource thrashing. By reusing threads for multiple tasks, the thread-creation overhead is spread over many tasks. As a bonus, because the thread already exists when a request arrives, the delay introduced by thread creation is eliminated, and the request can be serviced immediately. Furthermore, by properly tuning the number of threads in the thread pool, resource thrashing may also be eliminated by forcing any request to wait until a thread is available to process it.
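The command workflow described above (create, post, wait, get, release) can be sketched as follows. This is a minimal illustration, assuming the Thread package with its tpool API is available:

```tcl
package require Thread

# Create a pool with a bounded worker count.
set pool [tpool::create -minworkers 1 -maxworkers 4]

# Post two jobs; each returns the job ID used to track it.
set j1 [tpool::post $pool {expr {6 * 7}}]
set j2 [tpool::post $pool {string toupper "tcl"}]

# Wait until all posted jobs have completed; tpool::wait stores
# the still-pending IDs back into the "pending" variable.
set pending [list $j1 $j2]
while {[llength $pending]} {
    tpool::wait $pool $pending pending
}

# Results may be retrieved only for jobs that were waited upon.
puts [tpool::get $pool $j1]   ;# 42
puts [tpool::get $pool $j2]   ;# TCL

# Drop our reference; the pool terminates when the counter reaches zero.
tpool::release $pool
```

Note that the wait loop services the event loop while blocked, so timer and file events posted from elsewhere in the application keep firing.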
See also
--------

**[thread](thread.htm)**, **[tsv](tsv.htm)**, **[ttrace](ttrace.htm)**

tcl_tk ttrace

ttrace
======

[NAME](ttrace.htm#M2) ttrace — Trace-based interpreter initialization [SYNOPSIS](ttrace.htm#M3) [DESCRIPTION](ttrace.htm#M4) [USER COMMANDS](ttrace.htm#M5) [**ttrace::eval** *arg* ?arg ...?](ttrace.htm#M6) [**ttrace::enable**](ttrace.htm#M7) [**ttrace::disable**](ttrace.htm#M8) [**ttrace::cleanup**](ttrace.htm#M9) [**ttrace::update** ?epoch?](ttrace.htm#M10) [**ttrace::getscript**](ttrace.htm#M11) [CALLBACK COMMANDS](ttrace.htm#M12) [**ttrace::atenable** *cmd* *arglist* *body*](ttrace.htm#M13) [**ttrace::atdisable** *cmd* *arglist* *body*](ttrace.htm#M14) [**ttrace::addtrace** *cmd* *arglist* *body*](ttrace.htm#M15) [**ttrace::addscript** *name* *body*](ttrace.htm#M16) [**ttrace::addresolver** *cmd* *arglist* *body*](ttrace.htm#M17) [**ttrace::addcleanup** *body*](ttrace.htm#M18) [**ttrace::addentry** *cmd* *var* *val*](ttrace.htm#M19) [**ttrace::getentry** *cmd* *var*](ttrace.htm#M20) [**ttrace::getentries** *cmd* ?pattern?](ttrace.htm#M21) [**ttrace::delentry** *cmd*](ttrace.htm#M22) [**ttrace::preload** *cmd*](ttrace.htm#M23) [DISCUSSION](ttrace.htm#M24) [SEE ALSO](ttrace.htm#M25) [KEYWORDS](ttrace.htm#M26)

Name
----

ttrace — Trace-based interpreter initialization

Synopsis
--------

package require **Tcl 8.4**
package require **Thread ?2.8?**

**ttrace::eval** *arg* ?arg ...?
**ttrace::enable**
**ttrace::disable**
**ttrace::cleanup**
**ttrace::update** ?epoch?
**ttrace::getscript**
**ttrace::atenable** *cmd* *arglist* *body*
**ttrace::atdisable** *cmd* *arglist* *body*
**ttrace::addtrace** *cmd* *arglist* *body*
**ttrace::addscript** *name* *body*
**ttrace::addresolver** *cmd* *arglist* *body*
**ttrace::addcleanup** *body*
**ttrace::addentry** *cmd* *var* *val*
**ttrace::getentry** *cmd* *var*
**ttrace::getentries** *cmd* ?pattern?
**ttrace::delentry** *cmd*
**ttrace::preload** *cmd*

Description
-----------

This package creates a framework for on-demand replication of interpreter state across threads in a multithreading application. It relies on the mechanics of Tcl command tracing and the Tcl **[unknown](../tclcmd/unknown.htm)** command and mechanism. The package requires the Tcl threading extension but can alternatively be used stand-alone within AOLserver, a scalable webserver from America Online.

In a nutshell, a short sample illustrating the usage of ttrace with the Tcl threading extension:

```
% package require Ttrace
2.8.0
% set t1 [thread::create {package require Ttrace; thread::wait}]
tid0x1802800
% ttrace::eval {proc test args {return test-[thread::id]}}
% thread::send $t1 test
test-tid0x1802800
% set t2 [thread::create {package require Ttrace; thread::wait}]
tid0x1804000
% thread::send $t2 test
test-tid0x1804000
```

As seen from above, the **ttrace::eval** and **ttrace::update** commands are used to create a thread-wide definition of a simple Tcl procedure and replicate that definition to all threads, whether already existing or created later.

User commands
-------------

This section describes user-level commands. These commands can be used by script writers to control the execution of the tracing framework.

**ttrace::eval** *arg* ?arg ...?
This command concatenates the given arguments and evaluates the resulting Tcl command with the trace framework enabled. If the command execution succeeds, it takes the necessary steps to automatically propagate the trace epoch change to all threads in the application. For AOLserver, only newly created threads actually receive the epoch change. For the Tcl threading extension, all threads created by the extension are automatically updated. If the command execution results in a Tcl error, no state propagation takes place. This is the most important user-level command of the package, as it wraps most of the commands described below.
This greatly simplifies things, because the user needs to learn just this one command in order to use the package effectively. The other commands, as described below, are included mostly for the sake of completeness.

**ttrace::enable**
Activates all registered callbacks in the framework and starts a new trace epoch. The trace epoch encapsulates all changes done to the interpreter during the time traces are activated.

**ttrace::disable**
Deactivates all registered callbacks in the framework and closes the current trace epoch.

**ttrace::cleanup**
Used to clean up all on-demand loaded resources in the interpreter. It effectively brings the Tcl interpreter back to its pristine state.

**ttrace::update** ?epoch?
Used to refresh the state of the interpreter to match the optional trace ?epoch?. If the optional ?epoch? is not given, it takes the most recent trace epoch.

**ttrace::getscript**
Returns a synthesized Tcl script which may be sourced in any interpreter. This script sets the stage for the Tcl **[unknown](../tclcmd/unknown.htm)** command so it can load traced resources from the in-memory database. Normally, this command is automatically invoked by other higher-level commands like **ttrace::eval** and **ttrace::update**.

Callback commands
-----------------

A word upfront: the package already includes callbacks for tracing the following Tcl commands: **[proc](../tclcmd/proc.htm)**, **[namespace](../tclcmd/namespace.htm)**, **[variable](../tclcmd/variable.htm)**, **[load](../tclcmd/load.htm)**, and **[rename](../tclcmd/rename.htm)**. Additionally, a set of callbacks for tracing resources (objects, classes) for XOTcl v1.3.8+, an OO-extension to Tcl, is also provided. This gives a solid base for solving most real-life needs and serves as an example for people wanting to customize the package to cover their specific needs. Below, you can find commands for registering callbacks in the framework and for writing callback scripts.
These callbacks are invoked by the framework in order to gather interpreter state changes, build the in-memory database, perform custom clean-ups and various other tasks.

**ttrace::atenable** *cmd* *arglist* *body*
Registers a Tcl callback to be activated at **ttrace::enable**. Registered callbacks are activated on a FIFO basis. The callback definition includes the name of the callback, *cmd*, a list of callback arguments, *arglist*, and the *body* of the callback. Effectively, this resembles the call interface of the standard Tcl **[proc](../tclcmd/proc.htm)** command.

**ttrace::atdisable** *cmd* *arglist* *body*
Registers a Tcl callback to be activated at **ttrace::disable**. Registered callbacks are activated on a FIFO basis. The callback definition includes the name of the callback, *cmd*, a list of callback arguments, *arglist*, and the *body* of the callback. Effectively, this resembles the call interface of the standard Tcl **[proc](../tclcmd/proc.htm)** command.

**ttrace::addtrace** *cmd* *arglist* *body*
Registers a Tcl callback to be activated for tracing the Tcl **cmd** command. The callback definition includes the name of the Tcl command to trace, *cmd*, a list of callback arguments, *arglist*, and the *body* of the callback. Effectively, this resembles the call interface of the standard Tcl **[proc](../tclcmd/proc.htm)** command.

**ttrace::addscript** *name* *body*
Registers a Tcl callback to be activated for building a Tcl script to be passed to other interpreters. This script is used to set the stage for the Tcl **[unknown](../tclcmd/unknown.htm)** command. Registered callbacks are activated on a FIFO basis. The callback definition includes the name of the callback, *name*, and the *body* of the callback.

**ttrace::addresolver** *cmd* *arglist* *body*
Registers a Tcl callback to be activated by the overloaded Tcl **[unknown](../tclcmd/unknown.htm)** command. Registered callbacks are activated on a FIFO basis.
This callback is used to resolve the resource and load it into the current interpreter.

**ttrace::addcleanup** *body*
Registers a Tcl callback to be activated by **ttrace::cleanup**. Registered callbacks are activated on a FIFO basis.

**ttrace::addentry** *cmd* *var* *val*
Adds one entry to the named in-memory database.

**ttrace::getentry** *cmd* *var*
Returns the value of the entry from the named in-memory database.

**ttrace::getentries** *cmd* ?pattern?
Returns the names of all entries from the named in-memory database.

**ttrace::delentry** *cmd*
Deletes an entry from the named in-memory database.

**ttrace::preload** *cmd*
Registers the Tcl command to be loaded in the interpreter. Commands registered this way will always be part of the interpreter and will not be loaded on demand by the Tcl **[unknown](../tclcmd/unknown.htm)** command.

Discussion
----------

Common introspective state-replication approaches use a custom Tcl script to introspect the running interpreter and synthesize another Tcl script to replicate that state in some other interpreter. This package, on the contrary, uses Tcl command traces. Command traces are registered on selected Tcl commands, like **[proc](../tclcmd/proc.htm)**, **[namespace](../tclcmd/namespace.htm)**, **[load](../tclcmd/load.htm)** and other standard (and/or user-defined) Tcl commands. When activated, those traces build an in-memory database of created resources. This database is used as a resource repository for the (overloaded) Tcl **[unknown](../tclcmd/unknown.htm)** command, which creates the requested resource in the interpreter on demand. This way, users can update just one (master) interpreter in one thread and replicate that interpreter's state (or part of it) to other threads/interpreters in the process. An immediate benefit of this approach is a much smaller memory footprint of the application and much faster thread creation.
By not loading all necessary procedures (and other resources) in every thread at thread initialization time, but deferring this to the time the resource is actually referenced, significant improvements in both memory consumption and thread initialization time can be achieved. Some tests have shown that the memory footprint of a multithreading Tcl application went down more than three times and thread startup time was reduced about 50 times. Note that your mileage may vary. Other benefits include much finer control over what (and when) gets replicated from the master to other Tcl threads/interpreters.

See also
--------

**[thread](thread.htm)**, **[tpool](tpool.htm)**, **[tsv](tsv.htm)**
tcl_tk TdbcCmd TdbcCmd ======= | | | | | --- | --- | --- | | [tdbc](tdbc.htm "Tcl Database Connectivity") | [tdbc::mapSqlState](tdbc_mapsqlstate.htm "Map SQLSTATE to error class") | [tdbc::statement](tdbc_statement.htm "TDBC statement object") | | [tdbc::connection](tdbc_connection.htm "TDBC connection object") | [tdbc::resultset](tdbc_resultset.htm "TDBC result set object") | [tdbc::tokenize](tdbc_tokenize.htm "TDBC SQL tokenizer") | Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TdbcCmd/contents.htm> tcl_tk tdbc_mapSqlState tdbc\_mapSqlState ================= Name ---- tdbc::mapSqlState — Map SQLSTATE to error class Synopsis -------- package require **tdbc 1.0** **tdbc::mapSqlState** *sqlstate* Description ----------- The **tdbc::mapSqlState** command accepts a string that is expected to be a five-character 'SQL state' as returned from a SQL database when an error occurs. It examines the first two characters of the string, and returns an error class as a human- and machine-readable name (for example, **FEATURE\_NOT\_SUPPORTED**, **DATA\_EXCEPTION** or **INVALID\_CURSOR\_STATE**). The TDBC specification requires database drivers to return a description of an error in the error code when an error occurs. The description is a string that has at least four elements: "**[TDBC](tdbc.htm)** *errorClass* *sqlstate* *driverName* *details...*". The **tdbc::mapSqlState** command gives a convenient way for a TDBC driver to generate the *errorClass* element given the SQL state returned from a database. See also -------- **[tdbc](tdbc.htm)**, **[tdbc::tokenize](tdbc_tokenize.htm)**, **[tdbc::connection](tdbc_connection.htm)**, **[tdbc::statement](tdbc_statement.htm)**, **[tdbc::resultset](tdbc_resultset.htm)** Copyright --------- Copyright (c) 2009 by Kevin B. Kenny. 
Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TdbcCmd/tdbc_mapSqlState.htm> tcl_tk tdbc_connection tdbc\_connection ================ [NAME](tdbc_connection.htm#M2) tdbc::connection — TDBC connection object [SYNOPSIS](tdbc_connection.htm#M3) [DESCRIPTION](tdbc_connection.htm#M4) [**type**](tdbc_connection.htm#M5) [**precision**](tdbc_connection.htm#M6) [**scale**](tdbc_connection.htm#M7) [**nullable**](tdbc_connection.htm#M8) [**tableCatalog**](tdbc_connection.htm#M9) [**tableSchema**](tdbc_connection.htm#M10) [**tableName**](tdbc_connection.htm#M11) [**constraintCatalog**](tdbc_connection.htm#M12) [**constraintSchema**](tdbc_connection.htm#M13) [**constraintName**](tdbc_connection.htm#M14) [**columnName**](tdbc_connection.htm#M15) [**ordinalPosition**](tdbc_connection.htm#M16) [**foreignConstraintCatalog**](tdbc_connection.htm#M17) [**foreignConstraintSchema**](tdbc_connection.htm#M18) [**foreignConstraintName**](tdbc_connection.htm#M19) [**primaryConstraintCatalog**](tdbc_connection.htm#M20) [**primaryConstraintSchema**](tdbc_connection.htm#M21) [**primaryConstraintName**](tdbc_connection.htm#M22) [**updateAction**](tdbc_connection.htm#M23) [**deleteAction**](tdbc_connection.htm#M24) [**primaryCatalog**](tdbc_connection.htm#M25) [**primarySchema**](tdbc_connection.htm#M26) [**primaryTable**](tdbc_connection.htm#M27) [**primaryColumn**](tdbc_connection.htm#M28) [**foreignCatalog**](tdbc_connection.htm#M29) [**foreignSchema**](tdbc_connection.htm#M30) [**foreignTable**](tdbc_connection.htm#M31) [**foreignColumn**](tdbc_connection.htm#M32) [**ordinalPosition**](tdbc_connection.htm#M33) [CONFIGURATION OPTIONS](tdbc_connection.htm#M34) [**-encoding** *name*](tdbc_connection.htm#M35) [**-isolation** *level*](tdbc_connection.htm#M36) [**-timeout** *ms*](tdbc_connection.htm#M37) [**-readonly** *flag*](tdbc_connection.htm#M38) [TRANSACTION ISOLATION LEVELS](tdbc_connection.htm#M39) 
[**readuncommitted**](tdbc_connection.htm#M40) [**readcommitted**](tdbc_connection.htm#M41) [**repeatableread**](tdbc_connection.htm#M42) [**serializable**](tdbc_connection.htm#M43) [**readonly**](tdbc_connection.htm#M44) [SEE ALSO](tdbc_connection.htm#M45) [KEYWORDS](tdbc_connection.htm#M46) [COPYRIGHT](tdbc_connection.htm#M47) Name ---- tdbc::connection — TDBC connection object Synopsis -------- package require **tdbc 1.0** package require **tdbc::***driver version* **tdbc::***driver***::connection create** *db* ?*-option value*...? *db* **configure** ?*-option value*...? *db* **[close](../tclcmd/close.htm)** *db* **foreignkeys** ?*-primary tableName*? ?*-foreign tableName*? *db* **prepare** *sql-code* *db* **preparecall** *call* *db* **primarykeys** *tableName* *db* **statements** *db* **resultsets** *db* **tables** ?*pattern*? *db* **columns** *table* ?*pattern*? *db* **begintransaction** *db* **commit** *db* **rollback** *db* **transaction** *script* *db* **allrows** ?**-as lists**|**dicts**? ?**-columnsvariable** *name*? ?**--**? *sql-code* ?*dictionary*? *db* **[foreach](../tclcmd/foreach.htm)** ?**-as lists**|**dicts**? ?**-columnsvariable** *name*? ?--? *varName sqlcode* ?*dictionary*? *script* Description ----------- Every database driver for TDBC (Tcl DataBase Connectivity) implements a *connection* object that represents a connection to a database. By convention, this object is created by the command, **tdbc::***driver***::connection create**. This command accepts the name of a Tcl command that will represent the connection and a possible set of options (see **CONFIGURATION OPTIONS**). It establishes a connection to the database and returns the name of the newly-created Tcl command. The **configure** object command on a database connection, if presented with no arguments, returns a list of alternating keywords and values representing the connection's current configuration. 
If presented with a single argument *-option*, it returns the configured value of the given option. Otherwise, it must be given an even number of arguments which are alternating options and values. The specified options receive the specified values, and nothing is returned. The **[close](../tclcmd/close.htm)** object command on a database connection closes the connection. All active statements and result sets on the connection are closed. Any uncommitted transaction is rolled back. The object command is deleted. The **prepare** object command on a database connection prepares a SQL statement for execution. The *sql-code* argument must contain a single SQL statement to be executed. Bound variables may be included. The return value is a newly-created Tcl command that represents the statement. See **[tdbc::statement](tdbc_statement.htm)** for more detailed discussion of the SQL accepted by the **prepare** object command and the interface accepted by a statement. On a database connection where the underlying database and driver support stored procedures, the **preparecall** object command prepares a call to a stored procedure for execution. The syntax of the stored procedure call is: ``` ?*resultvar* =? *procname*(?*arg* ?, *arg*...?) ``` The return value is a newly-created Tcl command that represents the statement. See **[tdbc::statement](tdbc_statement.htm)** for the interface accepted by a statement. The **statements** object command returns a list of statements that have been created by the **prepare** and **preparecall** commands against the given connection and have not yet been closed. The **resultsets** object command returns a list of result sets that have been obtained by executing statements prepared using the given connection and not yet closed. The **tables** object command allows the program to query the connection for the names of tables that exist in the database. The optional *pattern* parameter is a pattern to match the name of a table. 
It may contain the SQL wild-card characters '**%**' and '**\_**'. The result is a dictionary whose keys are table names and whose values are subdictionaries. See the documentation for the individual database driver for the interpretation of the values. The **columns** object command allows the program to query the connection for the names of columns that exist in a given table. The optional *pattern* parameter is a pattern to match the name of a column. It may contain the SQL wild-card characters '**%**' and '**\_**'. The result is a dictionary whose keys are column names and whose values are dictionaries. Each of the subdictionaries will contain at least the following keys and values (and may contain others whose usage is determined by a specific database driver). **type** Contains the data type of the column, and will generally be chosen from the set, **bigint**, **[binary](../tclcmd/binary.htm)**, **bit**, **char**, **date**, **decimal**, **double**, **float**, **integer**, **longvarbinary**, **longvarchar**, **numeric**, **real**, **[time](../tclcmd/time.htm)**, **timestamp**, **smallint**, **tinyint**, **varbinary**, and **varchar**. (If the column has a type that cannot be represented as one of the above, **type** will contain a driver-dependent description of the type.) **precision** Contains the precision of the column in bits, decimal digits, or the width in characters, according to the type. **scale** Contains the scale of the column (the number of digits after the radix point), for types that support the concept. **nullable** Contains 1 if the column can contain NULL values, and 0 otherwise. The **primarykeys** object command allows the program to query the connection for the primary keys belonging to a given table. The *tableName* parameter identifies the table being interrogated. The result is a list of dictionaries enumerating the keys (in a similar format to the list returned by *$connection* **allrows -as dicts**). The keys of the dictionary may include at least the following. Values that are NULL or meaningless in a given database are omitted. 
**tableCatalog** Name of the catalog in which the table appears. **tableSchema** Name of the schema in which the table appears. **tableName** Name of the table owning the primary key. **constraintCatalog** Name of the catalog in which the primary key constraint appears. In some database systems, this may not be the same as the table's catalog. **constraintSchema** Name of the schema in which the primary key constraint appears. In some database systems, this may not be the same as the table's schema. **constraintName** Name of the primary key constraint. **columnName** Name of a column that is a member of the primary key. **ordinalPosition** Ordinal position of the column within the primary key. To these columns may be added additional ones that are specific to a particular database system. The **foreignkeys** object command allows the program to query the connection for foreign key relationships that apply to a particular table. The relationships may be constrained to the keys that appear in a particular table (**-foreign** *tableName*), the keys that refer to a particular table (**-primary** *tableName*), or both. At least one of **-primary** and **-foreign** should be specified, although some drivers will enumerate all foreign keys in the current catalog if both options are omitted. The result of the **foreignkeys** object command is a list of dictionaries, with one list element per key (in a similar format to the list returned by *$connection* **allrows -as dicts**). The keys of the dictionary may include at least the following. Values that are NULL or meaningless in a given database are omitted. **foreignConstraintCatalog** Catalog in which the foreign key constraint appears. **foreignConstraintSchema** Schema in which the foreign key constraint appears. **foreignConstraintName** Name of the foreign key constraint. **primaryConstraintCatalog** Catalog holding the primary key constraint (or unique key constraint) on the column to which the foreign key refers. 
**primaryConstraintSchema** Schema holding the primary key constraint (or unique key constraint) on the column to which the foreign key refers. **primaryConstraintName** Name of the primary key constraint (or unique key constraint) on the column to which the foreign key refers. **updateAction** Action to take when an UPDATE statement invalidates the constraint. The value will be **CASCADE**, **SET DEFAULT**, **SET NULL**, **RESTRICT**, or **NO ACTION**. **deleteAction** Action to take when a DELETE statement invalidates the constraint. The value will be **CASCADE**, **SET DEFAULT**, **SET NULL**, **RESTRICT**, or **NO ACTION**. **primaryCatalog** Catalog name in which the primary table (the one to which the foreign key refers) appears. **primarySchema** Schema name in which the primary table (the one to which the foreign key refers) appears. **primaryTable** Table name of the primary table (the one to which the foreign key refers). **primaryColumn** Name of the column to which the foreign key refers. **foreignCatalog** Name of the catalog in which the table containing the foreign key appears. **foreignSchema** Name of the schema in which the table containing the foreign key appears. **foreignTable** Name of the table containing the foreign key. **foreignColumn** Name of the column appearing in the foreign key. **ordinalPosition** Position of the column in the foreign key, if the key is a compound key. The **begintransaction** object command on a database connection begins a transaction on the database. If the underlying database does not support atomic, consistent, isolated, durable transactions, the **begintransaction** object command returns an error reporting the fact. Similarly, if multiple **begintransaction** commands are executed without an intervening **commit** or **rollback** command, an error is returned unless the underlying database supports nested transactions. 
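As a concrete illustration, explicit transaction control can be paired with **[catch](../tclcmd/catch.htm)** so that a failure rolls back all changes. This is a minimal sketch, assuming the SQLite3 driver and a hypothetical database file `accounts.sqlite3` containing an `accounts` table (both names are illustrative, not part of this manual):

```tcl
# Hypothetical database file and table; tdbc::sqlite3 is used only
# because it ships with TDBC.
package require tdbc::sqlite3
tdbc::sqlite3::connection create db accounts.sqlite3

db begintransaction
if {[catch {
    db allrows {UPDATE accounts SET balance = balance - 10 WHERE id = 1}
    db allrows {UPDATE accounts SET balance = balance + 10 WHERE id = 2}
} err]} {
    db rollback                ;# on any error, undo both updates
    error $err
} else {
    db commit                  ;# otherwise make both updates permanent
}
db close
```

The **transaction** object command described below bundles this same begin/commit/rollback pattern into a single call.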
The **commit** object command on a database connection ends the most recent transaction started by **begintransaction** and commits changes to the database. The **rollback** object command on a database connection rolls back the most recent transaction started by **begintransaction**. The state of the database is as if nothing happened during the transaction. The **transaction** object command on a database connection presents a simple way of bundling a database transaction. It begins a transaction, and evaluates the supplied *script* argument as a Tcl script in the caller's scope. If *script* terminates normally, or by **[break](../tclcmd/break.htm)**, **[continue](../tclcmd/continue.htm)**, or **[return](../tclcmd/return.htm)**, the transaction is committed (and any action requested by **[break](../tclcmd/break.htm)**, **[continue](../tclcmd/continue.htm)**, or **[return](../tclcmd/return.htm)** takes place). If the commit fails for any reason, the error in the commit is treated as an error in the *script*. In the case of an error in *script* or in the commit, the transaction is rolled back and the error is rethrown. Any nonstandard return code from the script causes the transaction to be rolled back and then is rethrown. The **allrows** object command prepares a SQL statement (given by the *sql-code* parameter) to execute against the database. It then executes it (see **[tdbc::statement](tdbc_statement.htm)** for details) with the optional *dictionary* parameter giving bind variables. Next, it uses the *allrows* object command on the result set (see **[tdbc::resultset](tdbc_resultset.htm)**) to construct a list of the results. Finally, both result set and statement are closed. The return value is the list of results. The **[foreach](../tclcmd/foreach.htm)** object command prepares a SQL statement (given by the *sql-code* parameter) to execute against the database. 
It then executes it (see **[tdbc::statement](tdbc_statement.htm)** for details) with the optional *dictionary* parameter giving bind variables. Next, it uses the *foreach* object command on the result set (see **[tdbc::resultset](tdbc_resultset.htm)**) to evaluate the given *script* for each row of the results. Finally, both result set and statement are closed, even if the given *script* results in a **[return](../tclcmd/return.htm)**, an error, or an unusual return code. Configuration options --------------------- The configuration options accepted when the connection is created and on the connection's **configure** object command include the following, and may include others specific to a database driver. **-encoding** *name* Specifies the encoding to be used in connecting to the database. The *name* should be one of the names accepted by the **[encoding](../tclcmd/encoding.htm)** command. This option is usually unnecessary; most database drivers can figure out the encoding in use by themselves. **-isolation** *level* Specifies the transaction isolation level needed for transactions on the database. The acceptable values for *level* are shown under **[TRANSACTION ISOLATION LEVELS](#M39)**. **-timeout** *ms* Specifies the maximum time to wait for an operation of the database engine before reporting an error to the caller. The *ms* argument gives the maximum time in milliseconds. A value of zero (the default) specifies that the calling process is to wait indefinitely for database operations. **-readonly** *flag* Specifies that the connection will not modify the database (if the Boolean parameter *flag* is true), or that it may modify the database (if *flag* is false). If *flag* is true, this option may have the effect of raising the transaction isolation level to *readonly*. 
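The three forms of the **configure** object command can be sketched as follows. This assumes the SQLite3 driver and a hypothetical database file `inventory.sqlite3` (note that individual drivers accept only a subset of the standard options):

```tcl
# Hypothetical database file; tdbc::sqlite3 is used only because
# it ships with TDBC.
package require tdbc::sqlite3
tdbc::sqlite3::connection create db inventory.sqlite3

# No arguments: report all options as alternating keys and values.
puts [db configure]

# One option name: report that option's current value.
puts [db configure -isolation]

# Option/value pairs: reconfigure the connection; nothing is returned.
db configure -isolation serializable
db close
```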
Transaction isolation levels ---------------------------- The acceptable values for the **-isolation** configuration option are as follows: **readuncommitted** Allows the transaction to read "dirty", that is, uncommitted data. This isolation level may compromise data integrity, does not guarantee that foreign keys or uniqueness constraints are satisfied, and in general does not guarantee data consistency. **readcommitted** Forbids the transaction from reading "dirty" data, but does not guarantee repeatable reads; if a transaction reads a row of a database at a given time, there is no guarantee that the same row will be available at a later time in the same transaction. **repeatableread** Guarantees that any row of the database, once read, will have the same values for the life of a transaction. Still permits "phantom reads" (that is, newly-added rows appearing if a table is queried a second time). **serializable** The most restrictive (and most expensive) level of transaction isolation. Any query to the database, if repeated, will return precisely the same results for the life of the transaction, exactly as if the transaction were the only user of the database. **readonly** Behaves like **serializable** in that the only results visible to the transaction are those that were committed prior to the start of the transaction, but forbids the transaction from modifying the database. A database that does not implement one of these isolation levels will instead use the next more restrictive isolation level. If the given level of isolation cannot be obtained, the database interface throws an error reporting the fact. The default isolation level is **readcommitted**. A script should not change the isolation level while a transaction is in progress. See also -------- **[encoding](../tclcmd/encoding.htm)**, **[tdbc](tdbc.htm)**, **[tdbc::resultset](tdbc_resultset.htm)**, **[tdbc::statement](tdbc_statement.htm)**, **[tdbc::tokenize](tdbc_tokenize.htm)** Copyright --------- Copyright (c) 2008 by Kevin B. Kenny. 
Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TdbcCmd/tdbc_connection.htm> tcl_tk tdbc_tokenize tdbc\_tokenize ============== Name ---- tdbc::tokenize — TDBC SQL tokenizer Synopsis -------- package require **tdbc 1.0** **tdbc::tokenize** *string* Description ----------- As a convenience to database drivers, Tcl Database Connectivity (TDBC) provides a command to break SQL code apart into tokens so that bound variables can readily be identified and substituted. The **tdbc::tokenize** command accepts as its parameter a string that is expected to contain one or more SQL statements. It returns a list of substrings; concatenating these substrings together will yield the original string. Each substring is one of the following: 1. A bound variable, which begins with one of the characters '**:**', '**@**', or '**$**'. The remainder of the string is the variable name and will consist of alphanumeric characters and underscores. (The leading character will be non-numeric.) 2. A semicolon that separates two SQL statements. 3. Something else in a SQL statement. The tokenizer does not attempt to parse SQL; it merely identifies bound variables (distinguishing them from similar strings appearing inside quotes or comments) and statement delimiters. See also -------- **[tdbc](tdbc.htm)**, **[tdbc::connection](tdbc_connection.htm)**, **[tdbc::statement](tdbc_statement.htm)**, **[tdbc::resultset](tdbc_resultset.htm)** Copyright --------- Copyright (c) 2008 by Kevin B. Kenny. Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TdbcCmd/tdbc_tokenize.htm>
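The three token classes described above can be distinguished with a simple **switch**. This sketch requires only the **tdbc** package; the SQL text is illustrative:

```tcl
package require tdbc

set sql {SELECT phone FROM directory WHERE name = :name; DELETE FROM log}

# tdbc::tokenize returns substrings that concatenate back to $sql.
foreach tok [tdbc::tokenize $sql] {
    switch -glob -- $tok {
        :* - @* - {$*} { puts "bound variable:      $tok" }
        {;}            { puts "statement separator: $tok" }
        default        { puts "SQL text:            $tok" }
    }
}
```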
tcl_tk tdbc tdbc ==== Name ---- tdbc — Tcl Database Connectivity Synopsis -------- package require **tdbc 1.0** package require **tdbc::***driver version* **tdbc::***driver***::connection create** *db* ?*-option value*...? Description ----------- Tcl Database Connectivity (TDBC) is a common interface for Tcl programs to access SQL databases. It is implemented by a series of database *drivers*: separate modules, each of which adapts Tcl to the interface of one particular database system. All of the drivers implement a common series of commands for manipulating the database. These commands are all named dynamically, since they all represent objects in the database system. They include **connections,** which represent connections to a database; **statements,** which represent SQL statements, and **result sets,** which represent the sets of rows that result from executing statements. All of these have manual pages of their own, listed under **[SEE ALSO](#M5)**. In addition, TDBC itself has a few service procedures that are chiefly of interest to driver writers. **[SEE ALSO](#M5)** also enumerates them. See also -------- **[Tdbc\_Init](https://www.tcl.tk/man/tcl/TdbcLib/Tdbc_Init.htm)**, **[tdbc::connection](tdbc_connection.htm)**, **tdbc::mapSqlState**, **[tdbc::resultset](tdbc_resultset.htm)**, **[tdbc::statement](tdbc_statement.htm)**, **[tdbc::tokenize](tdbc_tokenize.htm)**, **[tdbc::mysql](../tdbcmysqlcmd/tdbc_mysql.htm)**, **[tdbc::odbc](../tdbcodbccmd/tdbc_odbc.htm)**, **[tdbc::postgres](../tdbcpostgrescmd/tdbc_postgres.htm)**, **[tdbc::sqlite3](../tdbcsqlitecmd/tdbc_sqlite3.htm)** Copyright --------- Copyright (c) 2008 by Kevin B. Kenny. 
Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TdbcCmd/tdbc.htm> tcl_tk tdbc_statement tdbc\_statement =============== [NAME](tdbc_statement.htm#M2) tdbc::statement — TDBC statement object [SYNOPSIS](tdbc_statement.htm#M3) [DESCRIPTION](tdbc_statement.htm#M4) [**direction**](tdbc_statement.htm#M5) [**type**](tdbc_statement.htm#M6) [**precision**](tdbc_statement.htm#M7) [**scale**](tdbc_statement.htm#M8) [**nullable**](tdbc_statement.htm#M9) [EXAMPLES](tdbc_statement.htm#M10) [SEE ALSO](tdbc_statement.htm#M11) [KEYWORDS](tdbc_statement.htm#M12) [COPYRIGHT](tdbc_statement.htm#M13) Name ---- tdbc::statement — TDBC statement object Synopsis -------- package require **tdbc 1.0** package require **tdbc::***driver version* **tdbc::***driver***::connection create** *db* *?-option value*...? **[set](../tclcmd/set.htm)** *stmt* **[***db* **prepare** *sql-code***]** **[set](../tclcmd/set.htm)** *stmt* **[***db* **preparecall** *call***]** *$stmt* **params** *$stmt* **paramtype** ?*direction*? *type* ?*precision*? ?*scale*? *$stmt* **execute** ?*dict*? *$stmt* **resultsets** *$stmt* **allrows** ?**-as lists|dicts**? ?**-columnsvariable** *name*? ?**--**? ?*dict* *$stmt* **[foreach](../tclcmd/foreach.htm)** ?**-as lists|dicts**? ?**-columnsvariable** *name*? ?**--**? *varName* ?*dict*? *script* *$stmt* **[close](../tclcmd/close.htm)** Description ----------- Every database driver for TDBC (Tcl DataBase Connectivity) implements a *statement* object that represents a SQL statement in a database. Instances of this object are created by executing the **prepare** or **preparecall** object command on a database connection. The **prepare** object command against the connection accepts arbitrary SQL code to be executed against the database. The SQL code may contain *bound variables*, which are strings of alphanumeric characters or underscores (the first character of the string may not be numeric), prefixed with a colon (**:**). 
If a bound variable appears in the SQL statement, and is not in a string set off by single or double quotes, nor in a comment introduced by **--**, it becomes a value that is substituted when the statement is executed. A bound variable becomes a single value (string or numeric) in the resulting statement. *Drivers are responsible for ensuring that the mechanism for binding variables prevents SQL injection.* The **preparecall** object command against the connection accepts a stylized statement in the form: ``` *procname* **(**?**:***varname*? ?**,:***varname*...?**)** ``` or ``` *varname* **=** *procname* **(**?**:***varname*? ?**,:***varname*...?**)** ``` This statement represents a call to a stored procedure *procname* in the database. The variable name to the left of the equal sign (if present), and all variable names that are parameters inside parentheses, become bound variables. The **params** method against a statement object enumerates the bound variables that appear in the statement. The result returned from the **params** method is a dictionary whose keys are the names of bound variables (listed in the order in which the variables first appear in the statement), and whose values are dictionaries. The subdictionaries include at least the following keys (database drivers may add additional keys that are not in this list). **direction** Contains one of the keywords, **in**, **out** or **inout** according to whether the variable is an input to or output from the statement. Only stored procedure calls will have **out** or **inout** parameters. **type** Contains the data type of the column, and will generally be chosen from the set, **bigint**, **[binary](../tclcmd/binary.htm)**, **bit**, **char**, **date**, **decimal**, **double**, **float**, **integer**, **longvarbinary**, **longvarchar**, **numeric**, **real**, **[time](../tclcmd/time.htm)**, **timestamp**, **smallint**, **tinyint**, **varbinary**, and **varchar**. 
(If the variable has a type that cannot be represented as one of the above, **type** will contain a driver-dependent description of the type.) **precision** Contains the precision of the column in bits, decimal digits, or the width in characters, according to the type. **scale** Contains the scale of the column (the number of digits after the radix point), for types that support the concept. **nullable** Contains 1 if the column can contain NULL values, and 0 otherwise. The **paramtype** object command allows the script to specify the type and direction of parameter transmission of a variable in a statement. (Some databases provide no method to determine this information automatically and place the burden on the caller to do so.) The *direction*, *type*, *precision*, *scale*, and *nullable* arguments have the same meaning as the corresponding dictionary values in the **params** object command. The **execute** object command executes the statement. Prior to executing the statement, values are provided for the bound variables that appear in it. If the *dict* parameter is supplied, it is searched for a key whose name matches the name of the bound variable. If the key is present, its value becomes the bound variable's value. If not, the bound variable is assigned a SQL NULL as its value. If the *dict* parameter is *not* supplied, the **execute** object command searches for a variable in the caller's scope whose name matches the name of the bound variable. If one is found, its value becomes the bound variable's value. If none is found, the bound variable is assigned a SQL NULL as its value. Once substitution is finished, the resulting statement is executed. The return value is a result set object (see **[tdbc::resultset](tdbc_resultset.htm)** for details). The **resultsets** method returns a list of all the result sets that have been returned by executing the statement and have not yet been closed. 
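The two substitution paths of **execute** can be sketched as follows, assuming the SQLite3 driver and a hypothetical `phonebook.sqlite3` file with a `directory` table (all names illustrative):

```tcl
# Hypothetical database file and table names.
package require tdbc::sqlite3
tdbc::sqlite3::connection create db phonebook.sqlite3

set stmt [db prepare {
    SELECT phone_num FROM directory
    WHERE first_name = :firstname AND last_name = :lastname
}]

# params enumerates the bound variables and their descriptions.
puts [$stmt params]

# Values may come from a dictionary argument...
set rs [$stmt execute {firstname Fred lastname Flintstone}]
$rs close

# ...or, when no dictionary is given, from variables in the
# caller's scope; a missing variable becomes a SQL NULL.
set firstname Fred
set lastname Flintstone
set rs [$stmt execute]
$rs close

$stmt close
db close
```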
The **allrows** object command executes the statement as with the **execute** object command, accepting an optional *dict* parameter giving bind variables. After executing the statement, it uses the *allrows* object command on the result set (see **[tdbc::resultset](tdbc_resultset.htm)**) to construct a list of the results. Finally, the result set is closed. The return value is the list of results. The **[foreach](../tclcmd/foreach.htm)** object command executes the statement as with the **execute** object command, accepting an optional *dict* parameter giving bind variables. After executing the statement, it uses the *foreach* object command on the result set (see **[tdbc::resultset](tdbc_resultset.htm)**) to evaluate the given *script* for each row of the results. Finally, the result set is closed, even if the given *script* results in a **[return](../tclcmd/return.htm)**, an error, or an unusual return code. The **[close](../tclcmd/close.htm)** object command removes a statement and any result sets that it has created. All system resources associated with the objects are freed. Examples -------- The following code would look up a telephone number in a directory, assuming an appropriate SQL schema: ``` package require tdbc::sqlite3 tdbc::sqlite3::connection create db phonebook.sqlite3 set statement [db prepare { select phone_num from directory where first_name = :firstname and last_name = :lastname }] set firstname Fred set lastname Flintstone $statement foreach row { puts [dict get $row phone_num] } $statement close db close ``` See also -------- **[encoding](../tclcmd/encoding.htm)**, **[tdbc](tdbc.htm)**, **[tdbc::connection](tdbc_connection.htm)**, **[tdbc::resultset](tdbc_resultset.htm)**, **[tdbc::tokenize](tdbc_tokenize.htm)** Copyright --------- Copyright (c) 2008 by Kevin B. Kenny. 
Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TdbcCmd/tdbc_statement.htm> tcl_tk tdbc_resultset tdbc\_resultset =============== Name ---- tdbc::resultset — TDBC result set object Synopsis -------- package require **tdbc 1.0** package require **tdbc::***driver version* **tdbc::***driver***::connection create** *db* *?-option value*...? **[set](../tclcmd/set.htm)** *stmt* **[***db* **prepare** *sql-code***]** **[set](../tclcmd/set.htm)** *resultset* **[***$stmt* **execute** ?*args...*?**]** *$resultset* **columns** *$resultset* **rowcount** *$resultset* **nextrow** ?**-as** **lists**|**dicts**? ?**--**? *varname* *$resultset* **nextlist** *varname* *$resultset* **nextdict** *varname* *$resultset* **nextresults** *$resultset* **allrows** ?**-as lists|dicts**? ?**-columnsvariable** *name*? ?**--**? *$resultset* **[foreach](../tclcmd/foreach.htm)** ?**-as lists|dicts**? ?**-columnsvariable** *name*? ?**--**? *varname* *script* *$resultset* **[close](../tclcmd/close.htm)** Description ----------- Every database driver for TDBC (Tcl DataBase Connectivity) implements a *result set* object that represents the results returned from executing a SQL statement in a database. Instances of this object are created by executing the **execute** object command on a statement object. The **columns** object command returns a list of the names of the columns in the result set. The columns will appear in the same order as they appeared in the SQL statement that performed the database query. If the SQL statement does not return a set of columns (for instance, if it is an INSERT, UPDATE, or DELETE statement), the **columns** command will return an empty list. The **rowcount** object command returns the number of rows in the database that were affected by the execution of an INSERT, UPDATE or DELETE statement. For a SELECT statement, the row count is unspecified. 
The **nextlist** object command sets the variable given by *varname* in the caller's scope to the next row of the results, expressed as a list of column values. NULL values are replaced by empty strings. The columns of the result row appear in the same order in which they appeared on the SELECT statement. The return of **nextlist** is **1** if the operation succeeded, and **0** if the end of the result set was reached. The **nextdict** object command sets the variable given by *varname* in the caller's scope to the next row of the results, expressed as a dictionary. The dictionary's keys are column names, and the values are the values of those columns in the row. If a column's value in the row is NULL, its key is omitted from the dictionary. The keys appear in the dictionary in the same order in which the columns appeared on the SELECT statement. The return of **nextdict** is **1** if the operation succeeded, and **0** if the end of the result set was reached. The **nextrow** object command is precisely equivalent to the **nextdict** or **nextlist** object command, depending on whether **-as dicts** (the default) or **-as lists** is specified. Some databases support the idea of a single statement that returns multiple sets of results. The **nextresults** object command is executed, typically after the **nextlist** or **nextdict** object command has returned **0**, to advance to the next result set. It returns **1** if there is another result set to process, and **0** if the result set just processed was the last. After calling **nextresults** and getting the return value of **1**, the caller may once again call **columns** to get the column descriptions of the next result set, and then return to calling **nextdict** or **nextlist** to process the rows of the next result set. It is an error to call **columns**, **nextdict**, **nextlist** or **nextrow** after **nextresults** has returned **0**. 
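The row-at-a-time interface above can be sketched as a nested loop. This assumes a result set `$rs` obtained from **execute** on some prepared statement; the `phone_num` column name is illustrative:

```tcl
# $rs is a result set from [$stmt execute]; column name is hypothetical.
while 1 {
    puts "columns: [$rs columns]"
    while {[$rs nextdict row]} {
        # A NULL column is simply absent from the row dictionary.
        if {[dict exists $row phone_num]} {
            puts [dict get $row phone_num]
        } else {
            puts "(no phone number)"
        }
    }
    # Advance to the next result set, if the driver returned several.
    if {![$rs nextresults]} break
}
$rs close
```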
The **allrows** object command sets the variable designated by the **-columnsvariable** option (if present) to the result of the **columns** object command. It then executes the **nextrow** object command repeatedly until the end of the result set is reached. If **nextresults** returns a nonzero value, it executes the above two steps (**columns** followed by iterated **nextrow** calls) as long as further results are available. The rows returned by **nextrow** are assembled into a Tcl list and become the return value of the **allrows** command; the last value returned from **columns** is what the application will see in **-columnsvariable**. The **[foreach](../tclcmd/foreach.htm)** object command sets the variable designated by the **-columnsvariable** option (if present) to the result of the **columns** object command. It then executes the **nextrow** object command repeatedly until the end of the result set is reached, storing the successive rows in the variable designated by *varName*. For each row, it executes the given *script*. If the script terminates with an error, the error is reported by the **[foreach](../tclcmd/foreach.htm)** command, and iteration stops. If the script performs a **[break](../tclcmd/break.htm)** operation, the iteration terminates prematurely. If the script performs a **[continue](../tclcmd/continue.htm)** operation, the iteration recommences with the next row. If the script performs a **[return](../tclcmd/return.htm)**, results are the same as if a script outside the control of **[foreach](../tclcmd/foreach.htm)** had returned. Any other unusual return code terminates the iteration and is reported from the **[foreach](../tclcmd/foreach.htm)**. Once **nextrow** returns **0**, the **[foreach](../tclcmd/foreach.htm)** object command tries to advance to the next result set using **nextresults**. 
If **nextresults** returns **1**, the above steps (**columns** and **nextrow**, with script invocation) are repeated as long as more result sets remain. The *script* will always see the correct description of the columns of the current result set in the variable designated by **-columnsvariable**. At the end of the call, the variable designated by **-columnsvariable** will have the description of the columns of the last result set. The **[close](../tclcmd/close.htm)** object command deletes the result set and frees any associated system resources. See also -------- **[encoding](../tclcmd/encoding.htm)**, **[tdbc](tdbc.htm)**, **[tdbc::connection](tdbc_connection.htm)**, **[tdbc::statement](tdbc_statement.htm)**, **[tdbc::tokenize](tdbc_tokenize.htm)** Copyright --------- Copyright (c) 2008 by Kevin B. Kenny. Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TdbcCmd/tdbc_resultset.htm> tcl_tk TdbcsqliteCmd TdbcsqliteCmd ============= | | | --- | | [tdbc::sqlite3](tdbc_sqlite3.htm "TDBC driver for the SQLite3 database manager") | tcl_tk tdbc_sqlite3 tdbc\_sqlite3 ============= Name ---- tdbc::sqlite3 — TDBC driver for the SQLite3 database manager Synopsis -------- package require **tdbc::sqlite3 1.0** **tdbc::sqlite3::connection create** *db* *fileName* ?*-option value...*? Description ----------- The **tdbc::sqlite3** driver provides a database interface that conforms to Tcl DataBase Connectivity (TDBC) and allows a Tcl script to connect to a SQLite3 database. It is also provided as a worked example of how to write a database driver in Tcl, so that driver authors have a starting point for further development. Connection to a SQLite3 database is established by invoking **tdbc::sqlite3::connection create**, passing it a string to be used as the connection handle followed by the file name of the database. The side effect of **tdbc::sqlite3::connection create** is to create a new database connection. 
As an alternative, **tdbc::sqlite3::connection new** may be used to create a database connection with an automatically assigned name. The return value from **tdbc::sqlite3::connection new** is the name that was chosen for the connection handle. See **tdbc::connection(n)** for the details of how to use the connection to manipulate a database.

Configuration options
---------------------

The standard configuration options **-encoding**, **-isolation**, **-readonly** and **-timeout** are all recognized, both on **tdbc::sqlite3::connection create** and on the **configure** method of the resulting connection. Since the encoding of a SQLite3 database is always well known, the **-encoding** option accepts only **utf-8** as an encoding and always returns **utf-8** for an encoding. The actual encoding may be set using a SQLite3 **PRAGMA** statement when creating a new database. Only the isolation levels **readuncommitted** and **serializable** are implemented. Other isolation levels are promoted to **serializable**. The **-readonly** flag is not implemented. **-readonly 0** is accepted silently, while **-readonly 1** reports an error.

Bugs
----

If any column name is not unique among the columns in a result set, the rows returned by **-as dicts** will be missing all but the rightmost of the duplicated columns. This limitation can be worked around by adding appropriate **AS** clauses to **SELECT** statements to ensure that all returned column names are unique. Plans are to fix this bug by using a C implementation of the driver, which will also improve performance significantly.

See also
--------

**[tdbc](../tdbccmd/tdbc.htm)**, **[tdbc::connection](../tdbccmd/tdbc_connection.htm)**, **[tdbc::resultset](../tdbccmd/tdbc_resultset.htm)**, **[tdbc::statement](../tdbccmd/tdbc_statement.htm)**

Copyright
---------

Copyright (c) 2008 by Kevin B. Kenny.
tcl_tk UserCmd

UserCmd
=======

| | |
| --- | --- |
| [tclsh](tclsh.htm "Simple shell containing Tcl interpreter") | [wish](wish.htm "Simple windowing shell") |

Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html)
<https://www.tcl.tk/man/tcl/UserCmd/contents.htm>

tcl_tk wish

wish
====

Name
----

wish — Simple windowing shell

Synopsis
--------

**wish** ?**-encoding** *name*? ?*fileName arg arg ...*?

Options
-------

**-encoding** *name* Specifies the encoding of the text stored in *fileName*. This option is only recognized prior to the *fileName* argument.

**-colormap** *new* Specifies that the window should have a new private colormap instead of using the default colormap for the screen.

**-display** *display* Display (and screen) on which to display window.

**-geometry** *geometry* Initial geometry to use for window. If this option is specified, its value is stored in the **[geometry](../tkcmd/tkvars.htm)** global variable of the application's Tcl interpreter.

**-name** *name* Use *name* as the title to be displayed in the window, and as the name of the interpreter for **[send](../tkcmd/send.htm)** commands.
**-sync** Execute all X server commands synchronously, so that errors are reported immediately. This will result in much slower execution, but it is useful for debugging. **-use** *id* Specifies that the main window for the application is to be embedded in the window whose identifier is *id*, instead of being created as an independent toplevel window. *Id* must be specified in the same way as the value for the **-use** option for toplevel widgets (i.e. it has a form like that returned by the **[winfo id](../tkcmd/winfo.htm)** command). Note that on some platforms this will only work correctly if *id* refers to a Tk **[frame](../tkcmd/frame.htm)** or **[toplevel](../tkcmd/toplevel.htm)** that has its **-container** option enabled. **-visual** *visual* Specifies the visual to use for the window. *Visual* may have any of the forms supported by the **[Tk\_GetVisual](https://www.tcl.tk/man/tcl/TkLib/GetVisual.htm)** procedure. **--** Pass all remaining arguments through to the script's **[argv](../tclcmd/tclvars.htm)** variable without interpreting them. This provides a mechanism for passing arguments such as **-name** to a script instead of having **wish** interpret them. Description ----------- **Wish** is a simple program consisting of the Tcl command language, the Tk toolkit, and a main program that reads commands from standard input or from a file. It creates a main window and then processes Tcl commands. If **wish** is invoked with arguments, then the first few arguments, ?**-encoding** *name*? ?*fileName*?, specify the name of a script file, and, optionally, the encoding of the text data stored in that script file. A value for *fileName* is recognized if the appropriate argument does not start with “-”. If there are no arguments, or the arguments do not specify a *fileName*, then wish reads Tcl commands interactively from standard input. It will continue processing commands until all windows have been deleted or until end-of-file is reached on standard input. 
If there exists a file “**.wishrc**” in the home directory of the user, **wish** evaluates the file as a Tcl script just before reading the first command from standard input. If arguments to **wish** do specify a *fileName*, then *fileName* is treated as the name of a script file. **Wish** will evaluate the script in *fileName* (which presumably creates a user interface), then it will respond to events until all windows have been deleted. Commands will not be read from standard input. There is no automatic evaluation of “**.wishrc**” when the name of a script file is presented on the **wish** command line, but the script file can always **[source](../tclcmd/source.htm)** it if desired. Note that on Windows, the **wish***version***.exe** program varies from the **[tclsh](tclsh.htm)***version***.exe** program in an additional important way: it does not connect to a standard Windows console and is instead a windowed program. Because of this, it additionally provides access to its own **[console](../tkcmd/console.htm)** command. Option processing ----------------- **Wish** automatically processes all of the command-line options described in the **[OPTIONS](#M4)** summary above. Any other command-line arguments besides these are passed through to the application using the **[argc](../tclcmd/tclvars.htm)** and **[argv](../tclcmd/tclvars.htm)** variables described later. Application name and class -------------------------- The name of the application, which is used for purposes such as **[send](../tkcmd/send.htm)** commands, is taken from the **-name** option, if it is specified; otherwise it is taken from *fileName*, if it is specified, or from the command name by which **wish** was invoked. In the last two cases, if the name contains a “/” character, then only the characters after the last slash are used as the application name. 
The class of the application, which is used for purposes such as specifying options with a **RESOURCE\_MANAGER** property or .Xdefaults file, is the same as its name except that the first letter is capitalized.

Variables
---------

**Wish** sets the following Tcl variables:

**argc** Contains a count of the number of *arg* arguments (0 if none), not including the options described above.

**argv** Contains a Tcl list whose elements are the *arg* arguments that follow a **--** option or do not match any of the options described in **[OPTIONS](#M4)** above, in order, or an empty string if there are no such arguments.

**argv0** Contains *fileName* if it was specified. Otherwise, contains the name by which **wish** was invoked.

**geometry** If the **-geometry** option is specified, **wish** copies its value into this variable. If the variable still exists after *fileName* has been evaluated, **wish** uses the value of the variable in a **[wm geometry](../tkcmd/wm.htm)** command to set the main window's geometry.

**tcl\_interactive** Contains 1 if **wish** is reading commands interactively (*fileName* was not specified and standard input is a terminal-like device), 0 otherwise.

Script files
------------

If you create a Tcl script in a file whose first line is

```
#!/usr/local/bin/wish
```

then you can invoke the script file directly from your shell if you mark it as executable. This assumes that **wish** has been installed in the default location in /usr/local/bin; if it is installed somewhere else then you will have to modify the above line to match. Many UNIX systems do not allow the **#!** line to exceed about 30 characters in length, so be sure that the **wish** executable can be accessed with a short file name. An even better approach is to start your script files with the following three lines:

```
#!/bin/sh
# the next line restarts using wish \
exec wish "$0" ${1+"$@"}
```

This approach has three advantages over the approach in the previous paragraph.
First, the location of the **wish** binary does not have to be hard-wired into the script: it can be anywhere in your shell search path. Second, it gets around the 30-character file name limit in the previous approach. Third, this approach will work even if **wish** is itself a shell script (this is done on some systems in order to handle multiple architectures or operating systems: the **wish** script selects one of several binaries to run). The three lines cause both **sh** and **wish** to process the script, but the **[exec](../tclcmd/exec.htm)** is only executed by **sh**. **sh** processes the script first; it treats the second line as a comment and executes the third line. The **[exec](../tclcmd/exec.htm)** statement causes the shell to stop processing and instead to start up **wish** to reprocess the entire script. When **wish** starts up, it treats all three lines as comments, since the backslash at the end of the second line causes the third line to be treated as part of the comment on the second line. The end of a script file may be marked either by the physical end of the medium, or by the character, “\032” (“\u001a”, control-Z). If this character is present in the file, the **wish** application will read text up to but not including the character. An application that requires this character in the file may encode it as “\032”, “\x1a”, or “\u001a”; or may generate it by use of commands such as **[format](../tclcmd/format.htm)** or **[binary](../tclcmd/binary.htm)**.

Prompts
-------

When **wish** is invoked interactively it normally prompts for each command with “**%** ”. You can change the prompt by setting the variables **tcl\_prompt1** and **tcl\_prompt2**. If variable **tcl\_prompt1** exists then it must consist of a Tcl script to output a prompt; instead of outputting a prompt **wish** will evaluate the script in **tcl\_prompt1**.
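For example, the following fragment (the prompt text is arbitrary) replaces the default “% ” prompt with one showing the current working directory:

```
# tcl_prompt1 holds a script that *outputs* the prompt itself;
# it is evaluated each time a prompt is needed.
set tcl_prompt1 {puts -nonewline "[pwd]> "}
```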
The variable **tcl\_prompt2** is used in a similar way when a newline is typed but the current command is not yet complete; if **tcl\_prompt2** is not set then no prompt is output for incomplete commands. See also -------- **[tclsh](tclsh.htm)**, **[toplevel](../tkcmd/toplevel.htm)**, **[Tk\_Main](https://www.tcl.tk/man/tcl/TkLib/Tk_Main.htm)**, **[Tk\_MainLoop](https://www.tcl.tk/man/tcl/TkLib/MainLoop.htm)**, **[Tk\_MainWindow](https://www.tcl.tk/man/tcl/TkLib/MainWin.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/UserCmd/wish.htm>
tcl_tk tclsh

tclsh
=====

Name
----

tclsh — Simple shell containing Tcl interpreter

Synopsis
--------

**tclsh** ?**-encoding** *name*? ?*fileName arg arg ...*?

Description
-----------

**Tclsh** is a shell-like application that reads Tcl commands from its standard input or from a file and evaluates them. If invoked with no arguments then it runs interactively, reading Tcl commands from standard input and printing command results and error messages to standard output. It runs until the **[exit](../tclcmd/exit.htm)** command is invoked or until it reaches end-of-file on its standard input. If there exists a file **.tclshrc** (or **tclshrc.tcl** on the Windows platforms) in the home directory of the user, interactive **tclsh** evaluates the file as a Tcl script just before reading the first command from standard input.

Script files
------------

If **tclsh** is invoked with arguments then the first few arguments specify the name of a script file, and, optionally, the encoding of the text data stored in that script file. Any additional arguments are made available to the script as variables (see below). Instead of reading commands from standard input **tclsh** will read Tcl commands from the named file; **tclsh** will exit when it reaches the end of the file. The end of the file may be marked either by the physical end of the medium, or by the character, “\032” (“\u001a”, control-Z). If this character is present in the file, the **tclsh** application will read text up to but not including the character.
An application that requires this character in the file may safely encode it as “\032”, “\x1a”, or “\u001a”; or may generate it by use of commands such as **[format](../tclcmd/format.htm)** or **[binary](../tclcmd/binary.htm)**. There is no automatic evaluation of **.tclshrc** when the name of a script file is presented on the **tclsh** command line, but the script file can always **[source](../tclcmd/source.htm)** it if desired. If you create a Tcl script in a file whose first line is

```
#!/usr/local/bin/tclsh
```

then you can invoke the script file directly from your shell if you mark the file as executable. This assumes that **tclsh** has been installed in the default location in /usr/local/bin; if it is installed somewhere else then you will have to modify the above line to match. Many UNIX systems do not allow the **#!** line to exceed about 30 characters in length, so be sure that the **tclsh** executable can be accessed with a short file name. An even better approach is to start your script files with the following three lines:

```
#!/bin/sh
# the next line restarts using tclsh \
exec tclsh "$0" ${1+"$@"}
```

This approach has three advantages over the approach in the previous paragraph. First, the location of the **tclsh** binary does not have to be hard-wired into the script: it can be anywhere in your shell search path. Second, it gets around the 30-character file name limit in the previous approach. Third, this approach will work even if **tclsh** is itself a shell script (this is done on some systems in order to handle multiple architectures or operating systems: the **tclsh** script selects one of several binaries to run). The three lines cause both **sh** and **tclsh** to process the script, but the **[exec](../tclcmd/exec.htm)** is only executed by **sh**. **sh** processes the script first; it treats the second line as a comment and executes the third line.
The **[exec](../tclcmd/exec.htm)** statement causes the shell to stop processing and instead to start up **tclsh** to reprocess the entire script. When **tclsh** starts up, it treats all three lines as comments, since the backslash at the end of the second line causes the third line to be treated as part of the comment on the second line. You should note that it is also common practice to install tclsh with its version number as part of the name. This has the advantage of allowing multiple versions of Tcl to exist on the same system at once, but also the disadvantage of making it harder to write scripts that start up uniformly across different versions of Tcl.

Variables
---------

**Tclsh** sets the following global Tcl variables in addition to those created by the Tcl library itself (such as **[env](../tclcmd/tclvars.htm)**, which maps environment variables such as **PATH** into Tcl):

**argc** Contains a count of the number of *arg* arguments (0 if none), not including the name of the script file.

**argv** Contains a Tcl list whose elements are the *arg* arguments, in order, or an empty string if there are no *arg* arguments.

**argv0** Contains *fileName* if it was specified. Otherwise, contains the name by which **tclsh** was invoked.

**tcl\_interactive** Contains 1 if **tclsh** is running interactively (no *fileName* was specified and standard input is a terminal-like device), 0 otherwise.

Prompts
-------

When **tclsh** is invoked interactively it normally prompts for each command with “**%** ”. You can change the prompt by setting the global variables **tcl\_prompt1** and **tcl\_prompt2**. If variable **tcl\_prompt1** exists then it must consist of a Tcl script to output a prompt; instead of outputting a prompt **tclsh** will evaluate the script in **tcl\_prompt1**.
The variable **tcl\_prompt2** is used in a similar way when a newline is typed but the current command is not yet complete; if **tcl\_prompt2** is not set then no prompt is output for incomplete commands. Standard channels ----------------- See **[Tcl\_StandardChannels](https://www.tcl.tk/man/tcl/TclLib/StdChannels.htm)** for more explanations. See also -------- **[auto\_path](../tclcmd/tclvars.htm)**, **[encoding](../tclcmd/encoding.htm)**, **[env](../tclcmd/tclvars.htm)**, **[fconfigure](../tclcmd/fconfigure.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/UserCmd/tclsh.htm> tcl_tk ItclCmd ItclCmd ======= | | | | | | | | --- | --- | --- | --- | --- | --- | | [itcl](itcl.htm "Object-oriented extensions to Tcl") | [itcl::code](code.htm "Capture the namespace context for a code fragment") | [itcl::delegation](itcldelegate.htm "Delegate methods, procs or options to other objects") | [itcl::extendedclass](itclextendedclass.htm "Create a extendedclass of objects") | [itcl::local](local.htm "Create an object local to a procedure") | [itcl::widget](itclwidget.htm "Create a widget class of objects") | | [itcl::body](body.htm "Change the body for a class method/proc") | [itcl::component](itclcomponent.htm "Define components for extendedclass, widget or widgetadaptor") | [itcl::delete](delete.htm "Delete things in the interpreter") | [itcl::find](find.htm "Search for classes and objects") | [itcl::option](itcloption.htm "Define options for extendedclass, widget or widgetadaptor") | [itclvars](itclvars.htm "Variables used by [incr Tcl]") | | [itcl::class](class.htm "Create a class of objects") | [itcl::configbody](configbody.htm "Change the \"config\" code for a public variable") | [itcl::ensemble](ensemble.htm "Create or modify a composite command") | [itcl::is](is.htm "Test argument to see if it is a class or an object") | [itcl::scope](scope.htm "Capture the namespace context for a variable") | Licensed under 
[Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/ItclCmd/contents.htm> tcl_tk itclvars itclvars ======== Name ---- itclvars — variables used by [incr Tcl] Description ----------- The following global variables are created and managed automatically by the **[incr Tcl]** library. Except where noted below, these variables should normally be treated as read-only by application-specific code and by users. **itcl::library** When an interpreter is created, **[incr Tcl]** initializes this variable to hold the name of a directory containing the system library of **[incr Tcl]** scripts. The initial value of **itcl::library** is set from the ITCL\_LIBRARY environment variable if it exists, or from a compiled-in value otherwise. **itcl::patchLevel** When an interpreter is created, **[incr Tcl]** initializes this variable to hold the current patch level for **[incr Tcl]**. For example, the value "**2.0p1**" indicates **[incr Tcl]** version 2.0 with the first set of patches applied. **itcl::purist** When an interpreter is created containing Tcl/Tk and the **[incr Tcl]** namespace facility, this variable controls a "backward-compatibility" mode for widget access. In vanilla Tcl/Tk, there is a single pool of commands, so the access command for a widget is the same as the window name. When a widget is created within a namespace, however, its access command is installed in that namespace, and should be accessed outside of the namespace using a qualified name. For example, ``` namespace foo { namespace bar { button .b -text "Testing" } } foo::bar::.b configure -background red pack .b ``` Note that the window name ".b" is still used in conjunction with commands like **[pack](../tkcmd/pack.htm)** and **[destroy](../tkcmd/destroy.htm)**. However, the access command for the widget (i.e., name that appears as the *first* argument on a command line) must be more specific. 
The "**[winfo command](../tkcmd/winfo.htm)**" command can be used to query the fully-qualified access command for any widget, so one can write: ``` [winfo command .b] configure -background red ``` and this is good practice when writing library procedures. Also, in conjunction with the **[bind](../tkcmd/bind.htm)** command, the "%q" field can be used in place of "%W" as the access command: ``` bind Button <Key-Return> {%q flash; %q invoke} ``` While this behavior makes sense from the standpoint of encapsulation, it causes problems with existing Tcl/Tk applications. Many existing applications are written with bindings that use "%W". Many library procedures assume that the window name is the access command. The **itcl::purist** variable controls a backward-compatibility mode. By default, this variable is "0", and the window name can be used as an access command in any context. Whenever the **[unknown](../tclcmd/unknown.htm)** procedure stumbles across a widget name, it simply uses "**[winfo command](../tkcmd/winfo.htm)**" to determine the appropriate command name. If this variable is set to "1", this backward-compatibility mode is disabled. This gives better encapsulation, but using the window name as the access command may lead to "invalid command" errors. **itcl::version** When an interpreter is created, **[incr Tcl]** initializes this variable to hold the version number of the form *x.y*. Changes to *x* represent major changes with probable incompatibilities and changes to *y* represent small enhancements and bug fixes that retain backward compatibility. Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/ItclCmd/itclvars.htm> tcl_tk delete delete ====== Name ---- itcl::delete — delete things in the interpreter Synopsis -------- **[itcl::delete](delete.htm)** *option* ?*arg arg ...*? Description ----------- The **delete** command is used to delete things in the interpreter. 
It is implemented as an ensemble, so extensions can add their own options and extend the behavior of this command. By default, the **delete** command handles the destruction of namespaces. The *option* argument determines what action is carried out by the command. The legal *options* (which may be abbreviated) are: **delete class** *name* ?*name...*? Deletes one or more **[incr Tcl]** classes called *name*. This deletes all objects in the class, and all derived classes as well. If an error is encountered while destructing an object, it will prevent the destruction of the class and any remaining objects. To destroy the entire class without regard for errors, use the "**delete namespace**" command. **delete object** *name* ?*name...*? Deletes one or more **[incr Tcl]** objects called *name*. An object is deleted by invoking all destructors in its class hierarchy, in order from most- to least-specific. If all destructors are successful, data associated with the object is deleted and the *name* is removed as a command from the interpreter. If the access command for an object resides in another namespace, then its qualified name can be used: ``` itcl::delete object foo::bar::x ``` If an error is encountered while destructing an object, the **delete** command is aborted and the object remains alive. To destroy an object without regard for errors, use the "**[rename](../tclcmd/rename.htm)**" command to destroy the object access command. **delete namespace** *name* ?*name...*? Deletes one or more namespaces called *name*. This deletes all commands and variables in the namespace, and deletes all child namespaces as well. When a namespace is deleted, it is automatically removed from the import lists of all other namespaces. 
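As an illustration of class deletion (a minimal sketch with a hypothetical class `Counter`):

```
itcl::class Counter {
    variable count 0
    method bump {} { incr count }
}

Counter c1
Counter c2

# Invokes the destructors of c1 and c2, then removes the
# Counter class itself from the interpreter.
itcl::delete class Counter
```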
Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html)
<https://www.tcl.tk/man/tcl/ItclCmd/delete.htm>

tcl_tk itclwidget

itclwidget
==========

Name
----

itcl::widget — create a widget class of objects

Warning!
--------

This is new functionality in [incr Tcl] where the API can still change!!

Synopsis
--------

**itcl::widget** *widgetName* **{**

**inherit** *baseWidget* ?*baseWidget*...?

**constructor** *args* ?*init*? *body*

**destructor** *body*

**public method** *name* ?*args*? ?*body*?

**protected method** *name* ?*args*? ?*body*?

**private method** *name* ?*args*? ?*body*?

**public proc** *name* ?*args*? ?*body*?

**protected proc** *name* ?*args*? ?*body*?

**private proc** *name* ?*args*? ?*body*?

**public variable** *varName* ?*init*? ?*config*?

**protected variable** *varName* ?*init*? ?*config*?

**private variable** *varName* ?*init*? ?*config*?

**public common** *varName* ?*init*?

**protected common** *varName* ?*init*?

**private common** *varName* ?*init*?

**public** *command* ?*arg arg ...*?

**protected** *command* ?*arg arg ...*?

**private** *command* ?*arg arg ...*?

**<delegation info>** see delegation page

**<option info>** see option page

**set** *varName* ?*value*?

**[array](../tclcmd/array.htm)** *option* ?*arg arg ...*?

**}**

*widgetName objName* ?*arg arg ...*?

*objName method* ?*arg arg ...*?

*widgetName::proc* ?*arg arg ...*?

Description
-----------

One of the fundamental constructs in **[incr Tcl]** is the widget definition. A widget is like a class with some additional features. Each widget acts as a template for actual objects that can be created. The widget itself is a namespace which contains things common to all objects. Each object has its own unique bundle of data which contains instances of the "variables" defined in the widget definition. Each object also has a built-in variable named "this", which contains the name of the object.
Widgets can also have "common" data members that are shared by all objects in a widget. Two types of functions can be included in the widget definition. "Methods" are functions which operate on a specific object, and therefore have access to both "variables" and "common" data members. "Procs" are ordinary procedures in the widget namespace, and only have access to "common" data members. If the body of any method or proc starts with "**@**", it is treated as the symbolic name for a C procedure. Otherwise, it is treated as a Tcl code script. See below for details on registering and using C procedures. A widget can only be defined once, although the bodies of widget methods and procs can be defined again and again for interactive debugging. See the **body** and **configbody** commands for details. Each namespace can have its own collection of objects and widgets. The list of widgets available in the current context can be queried using the "**[itcl::find widgets](find.htm)**" command, and the list of objects, with the "**[itcl::find objects](find.htm)**" command. A widget can be deleted using the "**delete widget**" command. Individual objects can be deleted using the "**delete object**" command. Widget definitions ------------------ **widget** *widgetName definition* Provides the definition for a widget named *widgetName*. If the widget *widgetName* already exists, or if a command called *widgetName* exists in the current namespace context, this command returns an error. If the widget definition is successfully parsed, *widgetName* becomes a command in the current context, handling the creation of objects for this widget. The widget *definition* is evaluated as a series of Tcl statements that define elements within the widget. The following widget definition commands are recognized: **inherit** *baseWidget* ?*baseWidget*...? Causes the current widget to inherit characteristics from one or more base widgets. 
Widgets must have been defined by a previous **widget** command, or must be available to the auto-loading facility (see "AUTO-LOADING" below). A single widget definition can contain no more than one **inherit** command. The order of *baseWidget* names in the **inherit** list affects the name resolution for widget members. When the same member name appears in two or more base widgets, the base widget that appears first in the **inherit** list takes precedence. For example, if widgets "Foo" and "Bar" both contain the member "x", and if another widget has the "**inherit**" statement:

```
inherit Foo Bar
```

then the name "x" means "Foo::x". Other inherited members named "x" must be referenced with their explicit name, like "Bar::x".

**constructor** *args* ?*init*? *body* Declares the *args* argument list and *body* used for the constructor, which is automatically invoked whenever an object is created. Before the *body* is executed, the optional *init* statement is used to invoke any base widget constructors that require arguments. Variables in the *args* specification can be accessed in the *init* code fragment, and passed to base widget constructors. After evaluating the *init* statement, any base widget constructors that have not been executed are invoked automatically without arguments. This ensures that all base widgets are fully constructed before the constructor *body* is executed. By default, this scheme causes constructors to be invoked in order from least- to most-specific. This is exactly the opposite of the order that widgets are reported by the **[info heritage](../tclcmd/info.htm)** command. If construction is successful, the constructor always returns the object name, regardless of how the *body* is defined, and the object name becomes a command in the current namespace context. If construction fails, an error message is returned.

**destructor** *body* Declares the *body* used for the destructor, which is automatically invoked when an object is deleted.
If the destructor is successful, the object data is destroyed and the object name is removed as a command from the interpreter. If destruction fails, an error message is returned and the object remains. When an object is destroyed, all destructors in its widget hierarchy are invoked in order from most- to least-specific. This is the order that the widgets are reported by the "**[info heritage](../tclcmd/info.htm)**" command, and it is exactly the opposite of the default constructor order.

**method** *name* ?*args*? ?*body*?

Declares a method called *name*. When the method *body* is executed, it will have automatic access to object-specific variables and common data members. If the *args* list is specified, it establishes the usage information for this method. The **body** command can be used to redefine the method body, but the *args* list must match this specification. Within the body of another widget method, a method can be invoked like any other command, simply by using its name. Outside of the widget context, the method name must be prefaced by an object name, which provides the context for the data that it manipulates. Methods in a base widget that are redefined in the current widget, or hidden by another base widget, can be qualified using the "*widgetName*::*method*" syntax.

**proc** *name* ?*args*? ?*body*?

Declares a proc called *name*. A proc is an ordinary procedure within the widget namespace. Unlike a method, a proc is invoked without referring to a specific object. When the proc *body* is executed, it will have automatic access only to common data members. If the *args* list is specified, it establishes the usage information for this proc. The **body** command can be used to redefine the proc body, but the *args* list must match this specification. Within the body of another widget method or proc, a proc can be invoked like any other command, simply by using its name.
In any other namespace context, the proc is invoked using a qualified name like "*widgetName***::***proc*". Procs in a base widget that are redefined in the current widget, or hidden by another base widget, can also be accessed via their qualified name. **variable** *varName* ?*init*? ?*config*? Defines an object-specific variable named *varName*. All object-specific variables are automatically available in widget methods. They need not be declared with anything like the **[global](../tclcmd/global.htm)** command. If the optional *init* string is specified, it is used as the initial value of the variable when a new object is created. Initialization forces the variable to be a simple scalar value; uninitialized variables, on the other hand, can be set within the constructor and used as arrays. The optional *config* script is only allowed for public variables. If specified, this code fragment is executed whenever a public variable is modified by the built-in "configure" method. The *config* script can also be specified outside of the widget definition using the **configbody** command. **common** *varName* ?*init*? Declares a common variable named *varName*. Common variables reside in the widget namespace and are shared by all objects belonging to the widget. They are just like global variables, except that they need not be declared with the usual **[global](../tclcmd/global.htm)** command. They are automatically visible in all widget methods and procs. If the optional *init* string is specified, it is used as the initial value of the variable. Initialization forces the variable to be a simple scalar value; uninitialized variables, on the other hand, can be set with subsequent **[set](../tclcmd/set.htm)** and **[array](../tclcmd/array.htm)** commands and used as arrays. Once a common data member has been defined, it can be set using **[set](../tclcmd/set.htm)** and **[array](../tclcmd/array.htm)** commands within the widget definition. 
This allows common data members to be initialized as arrays. For example:

```
itcl::widget Foo {
    protected common boolean
    set boolean(true) 1
    set boolean(false) 0
}
```

Note that if common data members are initialized within the constructor, they get initialized again and again whenever new objects are created.

**public** *command* ?*arg arg ...*?
**protected** *command* ?*arg arg ...*?
**private** *command* ?*arg arg ...*?

These commands are used to set the protection level for widget members that are created when *command* is evaluated. The *command* is usually **method**, **[proc](../tclcmd/proc.htm)**, **[variable](../tclcmd/variable.htm)** or **common**, and the remaining *arg*'s complete the member definition. However, *command* can also be a script containing many different member definitions, and the protection level will apply to all of the members that are created.

Widget usage
------------

Once a widget has been defined, the widget name can be used as a command to create new objects belonging to the widget.

*widgetName objName* ?*args...*?

Creates a new object in widget *widgetName* with the name *objName*. Remaining arguments are passed to the constructor of the most-specific widget. This in turn passes arguments to base widget constructors before invoking its own body of commands. If construction is successful, a command called *objName* is created in the current namespace context, and *objName* is returned as the result of this operation. If an error is encountered during construction, the destructors are automatically invoked to free any resources that have been allocated, the object is deleted, and an error is returned. If *objName* contains the string "**#auto**", that string is replaced with an automatically generated name. Names have the form *widgetName<number>*, where the *widgetName* part is modified to start with a lowercase letter.
In widget "Toaster", for example, the "**#auto**" specification would produce names like toaster0, toaster1, etc. Note that "**#auto**" can also be buried within an object name:

```
fileselectiondialog .foo.bar.#auto -background red
```

This would generate an object named ".foo.bar.fileselectiondialog0".

Object usage
------------

Once an object has been created, the object name can be used as a command to invoke methods that operate on the object.

*objName method* ?*args...*?

Invokes a method named *method* on an object named *objName*. Remaining arguments are passed to the argument list for the method. The method name can be "constructor", "destructor", any method name appearing in the widget definition, or any of the following built-in methods.

Built-in methods
----------------

*objName* **cget** *option*

Provides access to public variables as configuration options. This mimics the behavior of the usual "cget" operation for Tk widgets. The *option* argument is a string of the form "**-***varName*", and this method returns the current value of the public variable *varName*.

*objName* **configure** ?*option*? ?*value option value ...*?

Provides access to public variables as configuration options. This mimics the behavior of the usual "configure" operation for Tk widgets. With no arguments, this method returns a list of lists describing all of the public variables. Each list has three elements: the variable name, its initial value and its current value. If a single *option* of the form "**-***varName*" is specified, then this method returns the information for that one variable. Otherwise, the arguments are treated as *option*/*value* pairs assigning new values to public variables. Each variable is assigned its new value, and if it has any "config" code associated with it, it is executed in the context of the widget where it was defined.
If the "config" code generates an error, the variable is set back to its previous value, and the **configure** method returns an error.

*objName* **isa** *widgetName*

Returns non-zero if the given *widgetName* can be found in the object's heritage, and zero otherwise.

*objName* **info** *option* ?*args...*?

Returns information related to a particular object named *objName*, or to its widget definition. The *option* parameter can be any of the following, as well as the options recognized by the usual Tcl "info" command:

*objName* **info widget**

Returns the name of the most-specific widget for object *objName*.

*objName* **info inherit**

Returns the list of base widgets as they were defined in the "**inherit**" command, or an empty string if this widget has no base widgets.

*objName* **info heritage**

Returns the current widget name and the entire list of base widgets in the order that they are traversed for member lookup and object destruction.

*objName* **info function** ?*cmdName*? ?**-protection**? ?**-type**? ?**-name**? ?**-args**? ?**-body**?

With no arguments, this command returns a list of all widget methods and procs. If *cmdName* is specified, it returns information for a specific method or proc. If no flags are specified, this command returns a list with the following elements: the protection level, the type (method/proc), the qualified name, the argument list and the body. Flags can be used to request specific elements from this list.

*objName* **info variable** ?*varName*? ?**-protection**? ?**-type**? ?**-name**? ?**-init**? ?**-value**? ?**-config**?

With no arguments, this command returns a list of all object-specific variables and common data members. If *varName* is specified, it returns information for a specific data member. If no flags are specified, this command returns a list with the following elements: the protection level, the type (variable/common), the qualified name, the initial value, and the current value.
If *varName* is a public variable, the "config" code is included on this list. Flags can be used to request specific elements from this list.

Chaining methods/procs
----------------------

Sometimes a base widget has a method or proc that is redefined with the same name in a derived widget. This is a way of making the derived widget handle the same operations as the base widget, but with its own specialized behavior. For example, suppose we have a Toaster widget that looks like this:

```
itcl::widget Toaster {
    variable crumbs 0
    method toast {nslices} {
        if {$crumbs > 50} {
            error "== FIRE! FIRE! =="
        }
        set crumbs [expr $crumbs+4*$nslices]
    }
    method clean {} {
        set crumbs 0
    }
}
```

We might create another widget like SmartToaster that redefines the "toast" method. If we want to access the base widget method, we can qualify it with the base widget name, to avoid ambiguity:

```
itcl::widget SmartToaster {
    inherit Toaster
    method toast {nslices} {
        if {$crumbs > 40} {
            clean
        }
        return [Toaster::toast $nslices]
    }
}
```

Instead of hard-coding the base widget name, we can use the "chain" command like this:

```
itcl::widget SmartToaster {
    inherit Toaster
    method toast {nslices} {
        if {$crumbs > 40} {
            clean
        }
        return [chain $nslices]
    }
}
```

The chain command searches through the widget hierarchy for a slightly more generic (base widget) implementation of a method or proc, and invokes it with the specified arguments. It starts at the current widget context and searches through base widgets in the order that they are reported by the "info heritage" command. If another implementation is not found, this command does nothing and returns the null string.

Auto-loading
------------

Widget definitions need not be loaded explicitly; they can be loaded as needed by the usual Tcl auto-loading facility. Each directory containing widget definition files should have an accompanying "tclIndex" file.
Each line in this file identifies a Tcl procedure or **[incr Tcl]** widget definition and the file where the definition can be found. For example, suppose a directory contains the definitions for widgets "Toaster" and "SmartToaster". Then the "tclIndex" file for this directory would look like:

```
# Tcl autoload index file, version 2.0 for [incr Tcl]
# This file is generated by the "auto_mkindex" command
# and sourced to set up indexing information for one or
# more commands.  Typically each line is a command that
# sets an element in the auto_index array, where the
# element name is the name of a command and the value is
# a script that loads the command.

set auto_index(::Toaster) "source $dir/Toaster.itcl"
set auto_index(::SmartToaster) "source $dir/SmartToaster.itcl"
```

The **[auto\_mkindex](../tclcmd/library.htm)** command is used to automatically generate "tclIndex" files. The auto-loader must be made aware of this directory by appending the directory name to the "auto\_path" variable. When this is in place, widgets will be auto-loaded as needed when used in an application.

C procedures
------------

C procedures can be integrated into an **[incr Tcl]** widget definition to implement methods, procs, and the "config" code for public variables. Any body that starts with "**@**" is treated as the symbolic name for a C procedure. Symbolic names are established by registering procedures via **[Itcl\_RegisterC()](https://www.tcl.tk/man/tcl/ItclLib/RegisterC.htm)**. This is usually done in the **[Tcl\_AppInit()](https://www.tcl.tk/man/tcl/TclLib/AppInit.htm)** procedure, which is automatically called when the interpreter starts up. In the following example, the procedure My\_FooCmd() is registered with the symbolic name "foo". This procedure can be referenced in the **body** command as "@foo".
```
int
Tcl_AppInit(interp)
    Tcl_Interp *interp;     /* Interpreter for application. */
{
    if (Itcl_Init(interp) == TCL_ERROR) {
        return TCL_ERROR;
    }
    if (Itcl_RegisterC(interp, "foo", My_FooCmd) != TCL_OK) {
        return TCL_ERROR;
    }
    return TCL_OK;
}
```

C procedures are implemented just like ordinary Tcl commands. See the **CrtCommand** man page for details. Within the procedure, widget data members can be accessed like ordinary variables using **[Tcl\_SetVar()](https://www.tcl.tk/man/tcl/TclLib/SetVar.htm)**, **[Tcl\_GetVar()](https://www.tcl.tk/man/tcl/TclLib/SetVar.htm)**, **[Tcl\_TraceVar()](https://www.tcl.tk/man/tcl/TclLib/TraceVar.htm)**, etc. Widget methods and procs can be executed like ordinary commands using **[Tcl\_Eval()](https://www.tcl.tk/man/tcl/TclLib/Eval.htm)**. **[incr Tcl]** makes this possible by automatically setting up the context before executing the C procedure. This scheme provides a natural migration path for code development. Widgets can be developed quickly using Tcl code to implement the bodies. An entire application can be built and tested. When necessary, individual bodies can be implemented with C code to improve performance.

Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html)
<https://www.tcl.tk/man/tcl/ItclCmd/itclwidget.htm>
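As a brief recap of the built-in methods described above, the following session is a hypothetical sketch (the Lamp widget, its members, and the window name .l are illustrative, not from the original text):

```
itcl::widget Lamp {
    public variable color "white" {
        puts "color changed to $color"   ;# optional "config" code
    }
    method describe {} {
        return "a $color lamp"
    }
}

Lamp .l                       ;# create an object (a window path, as for Tk widgets)
.l configure -color red       ;# assigns the value and runs the "config" code
puts [.l cget -color]         ;# current value of the public variable
puts [.l isa Lamp]            ;# non-zero: Lamp is in the object's heritage
puts [.l info heritage]       ;# widget names, most- to least-specific
itcl::delete object .l        ;# invoke the destructors and remove the command
```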
tcl_tk local

local
=====

Name
----

itcl::local — create an object local to a procedure

Synopsis
--------

**itcl::local** *className objName* ?*arg arg ...*?

Description
-----------

The **local** command creates an **[incr Tcl]** object that is local to the current call frame. When the call frame goes away, the object is automatically deleted. This command is useful for creating objects that are local to a procedure. As a side effect, this command creates a variable named "**itcl-local-***xxx*", where *xxx* is the name of the object that is created. This variable detects when the call frame is destroyed and automatically deletes the associated object.

Example
-------

In the following example, a simple "counter" object is used within the procedure "test". The counter is created as a local object, so it is automatically deleted each time the procedure exits. The **[puts](../tclcmd/puts.htm)** statements included in the constructor/destructor show the object coming and going as the procedure is called.

```
itcl::class counter {
    private variable count 0
    constructor {} {
        puts "created: $this"
    }
    destructor {
        puts "deleted: $this"
    }

    method bump {{by 1}} {
        incr count $by
    }
    method get {} {
        return $count
    }
}

proc test {val} {
    local counter x
    for {set i 0} {$i < $val} {incr i} {
        x bump
    }
    return [x get]
}

set result [test 5]
puts "test: $result"

set result [test 10]
puts "test: $result"

puts "objects: [itcl::find objects *]"
```

Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html)
<https://www.tcl.tk/man/tcl/ItclCmd/local.htm>

tcl_tk body

body
====

Name
----

itcl::body — change the body for a class method/proc

Synopsis
--------

**itcl::body** *className***::***function args body*

Description
-----------

The **body** command is used outside of an **[incr Tcl]** class definition to define or redefine the body of a class method or proc. This facility allows a class definition to have separate "interface" and "implementation" parts.
The "interface" part is a **class** command with declarations for methods, procs, instance variables and common variables. The "implementation" part is a series of **body** and **configbody** commands. If the "implementation" part is kept in a separate file, it can be sourced again and again as bugs are fixed, to support interactive development. When using the "tcl" mode in the **emacs** editor, the "interface" and "implementation" parts can be kept in the same file; as bugs are fixed, individual bodies can be highlighted and sent to the test application. The name "*className***::***function*" identifies the method/proc being changed. If an *args* list was specified when the *function* was defined in the class definition, the *args* list for the **body** command must match in meaning. Variable names can change, but the argument lists must have the same required arguments and the same default values for optional arguments. The special **args** argument acts as a wildcard when included in the *args* list in the class definition; it will match zero or more arguments of any type when the body is redefined. If the *body* string starts with "**@**", it is treated as the symbolic name for a C procedure. The *args* list has little meaning for the C procedure, except to document the expected usage. (The C procedure is not guaranteed to use arguments in this manner.) If *body* does not start with "**@**", it is treated as a Tcl command script. When the function is invoked, command line arguments are matched against the *args* list, and local variables are created to represent each argument. This is the usual behavior for a Tcl-style proc. Symbolic names for C procedures are established by registering procedures via **[Itcl\_RegisterC()](https://www.tcl.tk/man/tcl/ItclLib/RegisterC.htm)**. This is usually done in the **[Tcl\_AppInit()](https://www.tcl.tk/man/tcl/TclLib/AppInit.htm)** procedure, which is automatically called when the interpreter starts up. 
In the following example, the procedure My\_FooCmd() is registered with the symbolic name "foo". This procedure can be referenced in the **body** command as "@foo".

```
int
Tcl_AppInit(interp)
    Tcl_Interp *interp;     /* Interpreter for application. */
{
    if (Itcl_Init(interp) == TCL_ERROR) {
        return TCL_ERROR;
    }
    if (Itcl_RegisterC(interp, "foo", My_FooCmd) != TCL_OK) {
        return TCL_ERROR;
    }
    return TCL_OK;
}
```

Example
-------

In the following example, a "File" class is defined to represent open files. The method bodies are included below the class definition via the **body** command. Note that the bodies of the constructor/destructor must be included in the class definition, but they can be redefined via the **body** command as well.

```
itcl::class File {
    private variable fid ""

    constructor {name access} {
        set fid [open $name $access]
    }
    destructor {
        close $fid
    }

    method get {}
    method put {line}
    method eof {}
}

itcl::body File::get {} {
    return [gets $fid]
}
itcl::body File::put {line} {
    puts $fid $line
}
itcl::body File::eof {} {
    return [::eof $fid]
}

#
# See the File class in action:
#
File x /etc/passwd "r"
while {![x eof]} {
    puts "=> [x get]"
}
itcl::delete object x
```

Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html)
<https://www.tcl.tk/man/tcl/ItclCmd/body.htm>

tcl_tk configbody

configbody
==========

Name
----

itcl::configbody — change the "config" code for a public variable

Synopsis
--------

**itcl::configbody** *className***::***varName body*

Description
-----------

The **configbody** command is used outside of an **[incr Tcl]** class definition to define or redefine the configuration code associated with a public variable. Public variables act like configuration options for an object. They can be modified outside the class scope using the built-in **configure** method.
Each variable can have a bit of "config" code associated with it that is automatically executed when the variable is configured. The **configbody** command can be used to define or redefine this body of code.

Like the **body** command, this facility allows a class definition to have separate "interface" and "implementation" parts. The "interface" part is a **class** command with declarations for methods, procs, instance variables and common variables. The "implementation" part is a series of **body** and **configbody** commands. If the "implementation" part is kept in a separate file, it can be sourced again and again as bugs are fixed, to support interactive development. When using the "tcl" mode in the **emacs** editor, the "interface" and "implementation" parts can be kept in the same file; as bugs are fixed, individual bodies can be highlighted and sent to the test application.

The name "*className***::***varName*" identifies the public variable being updated. If the *body* string starts with "**@**", it is treated as the symbolic name for a C procedure. Otherwise, it is treated as a Tcl command script.

Symbolic names for C procedures are established by registering procedures via **[Itcl\_RegisterC()](https://www.tcl.tk/man/tcl/ItclLib/RegisterC.htm)**. This is usually done in the **[Tcl\_AppInit()](https://www.tcl.tk/man/tcl/TclLib/AppInit.htm)** procedure, which is automatically called when the interpreter starts up. In the following example, the procedure My\_FooCmd() is registered with the symbolic name "foo". This procedure can be referenced in the **configbody** command as "@foo".

```
int
Tcl_AppInit(interp)
    Tcl_Interp *interp;     /* Interpreter for application. */
{
    if (Itcl_Init(interp) == TCL_ERROR) {
        return TCL_ERROR;
    }
    if (Itcl_RegisterC(interp, "foo", My_FooCmd) != TCL_OK) {
        return TCL_ERROR;
    }
    return TCL_OK;
}
```

Example
-------

In the following example, a "File" class is defined to represent open files. Whenever the "-name" option is configured, the existing file is closed, and a new file is opened. Note that the "config" code for a public variable is optional. The "-access" option, for example, does not have it.

```
itcl::class File {
    private variable fid ""

    public variable name ""
    public variable access "r"

    constructor {args} {
        eval configure $args
    }
    destructor {
        if {$fid != ""} {
            close $fid
        }
    }

    method get {}
    method put {line}
    method eof {}
}

itcl::body File::get {} {
    return [gets $fid]
}
itcl::body File::put {line} {
    puts $fid $line
}
itcl::body File::eof {} {
    return [::eof $fid]
}

itcl::configbody File::name {
    if {$fid != ""} {
        close $fid
    }
    set fid [open $name $access]
}

#
# See the File class in action:
#
File x
x configure -name /etc/passwd

while {![x eof]} {
    puts "=> [x get]"
}
itcl::delete object x
```

Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html)
<https://www.tcl.tk/man/tcl/ItclCmd/configbody.htm>

tcl_tk itcloption

itcloption
==========

Name
----

itcl::option — define options for extendedclass, widget or widgetadaptor

Parts of
this description are "borrowed" from Tcl extension [snit], as the functionality is mostly identical.

Warning!
--------

This is new functionality in [incr Tcl] where the API can still change!!

Synopsis
--------

**[option](../tkcmd/option.htm)** *optionSpec* ?*defaultValue*?
**[option](../tkcmd/option.htm)** *optionSpec* ?*options*?

Description
-----------

The **[option](../tkcmd/option.htm)** command is used inside an **[incr Tcl]** extendedclass/widget/widgetadaptor definition to define options. The first form defines an option for instances of this type, and optionally gives it an initial value. The initial value defaults to the empty string if no *defaultValue* is specified. An option defined in this way is said to be locally defined. The *optionSpec* is a list defining the option's name, resource name, and class name, e.g.:

```
option {-font font Font} {Courier 12}
```

The option name must begin with a hyphen, and must not contain any upper case letters. The resource name and class name are optional; if not specified, the resource name defaults to the option name, minus the hyphen, and the class name defaults to the resource name with the first letter capitalized. Thus, the following statement is equivalent to the previous example:

```
option -font {Courier 12}
```

See The Tk Option Database for more information about resource and class names. Options are normally set and retrieved using the standard instance methods configure and cget; within instance code (method bodies, etc.), option values are available through the options array:

```
set myfont $itcl_options(-font)
```

The second form lets you define option handlers (e.g., **-configuremethod**); a class that defines such handlers should use **configure** and **cget** to access its options, to avoid subtle errors. The option statement may include the following options:

**-default** *defvalue*

Defines the option's default value; the option's default value will be "" otherwise.
**-readonly**

The option is read-only; it can only be set using configure at creation time, i.e., in the type's constructor.

**-cgetmethod** *methodName*

Every locally-defined option may define a -cgetmethod; it is called when the option's value is retrieved using the cget method. Whatever the method's body returns will be the return value of the call to cget. The named method must take one argument, the option name. For example, this code is equivalent to (though slower than) Itcl's default handling of cget:

```
option -font -cgetmethod GetOption

method GetOption {option} {
    return $itcl_options($option)
}
```

Note that it's possible for any number of options to share a -cgetmethod.

**-cgetmethodvar** *varName*

This is very similar to -cgetmethod; the only difference is that it names a variable in which the cget method to use can be found at runtime.

**-configuremethod** *methodName*

Every locally-defined option may define a -configuremethod; it is called when the option's value is set using the configure or configurelist methods. It is the named method's responsibility to save the option's value; in other words, the value will not be saved to the itcl\_options() array unless the method saves it there. The named method must take two arguments, the option name and its new value. For example, this code is equivalent to (though slower than) Itcl's default handling of configure:

```
option -font -configuremethod SetOption

method SetOption {option value} {
    set itcl_options($option) $value
}
```

Note that it's possible for any number of options to share a single -configuremethod.

**-configuremethodvar** *varName*

This is very similar to -configuremethod; the only difference is that it names a variable in which the configure method to use can be found at runtime.
**-validatemethod** *methodName*

Every locally-defined option may define a -validatemethod; it is called when the option's value is set using the configure or configurelist methods, just before the -configuremethod (if any). It is the named method's responsibility to validate the option's new value, and to throw an error if the value is invalid. The named method must take two arguments, the option name and its new value. For example, this code verifies that -flag's value is a valid Boolean value:

```
option -flag -validatemethod CheckBoolean

method CheckBoolean {option value} {
    if {![string is boolean -strict $value]} {
        error "option $option must have a boolean value."
    }
}
```

Note that it's possible for any number of options to share a single -validatemethod.

**-validatemethodvar** *varName*

This is very similar to -validatemethod; the only difference is that it names a variable in which the validate method to use can be found at runtime.

Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html)
<https://www.tcl.tk/man/tcl/ItclCmd/itcloption.htm>

tcl_tk code

code
====

Name
----

itcl::code — capture the namespace context for a code fragment

Synopsis
--------

**itcl::code** ?**-namespace** *name*? *command* ?*arg arg ...*?

Description
-----------

Creates a scoped value for the specified *command* and its associated *arg* arguments. A scoped value is a list with three elements: the "@scope" keyword, a namespace context, and a value string. For example, the command

```
namespace foo {
    code puts "Hello World!"
}
```

produces the scoped value:

```
@scope ::foo {puts {Hello World!}}
```

Note that the **code** command captures the current namespace context. If the **-namespace** flag is specified, then the current context is ignored, and the *name* string is used as the namespace context. Extensions like Tk execute ordinary code fragments in the global namespace.
A scoped value captures a code fragment together with its namespace context in a way that allows it to be executed properly later. It is needed, for example, to wrap up code fragments when a Tk widget is used within a namespace:

```
namespace foo {
    private proc report {mesg} {
        puts "click: $mesg"
    }

    button .b1 -text "Push Me" -command [code report "Hello World!"]
    pack .b1
}
```

The code fragment associated with button .b1 only makes sense in the context of namespace "foo". Furthermore, the "report" procedure is private, and can only be accessed within that namespace. The **code** command wraps up the code fragment in a way that allows it to be executed properly when the button is pressed. Also, note that the **code** command preserves the integrity of arguments on the command line. This makes it a natural replacement for the **[list](../tclcmd/list.htm)** command, which is often used to format Tcl code fragments. In other words, instead of using the **[list](../tclcmd/list.htm)** command like this:

```
after 1000 [list puts "Hello $name!"]
```

use the **code** command like this:

```
after 1000 [code puts "Hello $name!"]
```

This not only formats the command correctly, but also captures its namespace context. Scoped commands can be invoked like ordinary code fragments, with or without the **[eval](../tclcmd/eval.htm)** command. For example, the following statements work properly:

```
set cmd {@scope ::foo .b1}
$cmd configure -background red

set opts {-bg blue -fg white}
eval $cmd configure $opts
```

Note that scoped commands bypass the usual protection mechanisms; the command:

```
@scope ::foo {report {Hello World!}}
```

can be used to access the "foo::report" proc from any namespace context, even though it is private.
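The **-namespace** flag described earlier can be sketched in the same style (this assumes the ::foo namespace and its report proc from the example above; the exact printed output depends on that definition):

```
# Build a scoped command for namespace ::foo explicitly,
# even though the current context is the global namespace.
set cmd [itcl::code -namespace ::foo report "Hello World!"]

# $cmd is an ordinary three-element scoped value, roughly:
#     @scope ::foo {report {Hello World!}}
# and can be invoked later like any other code fragment.
eval $cmd
```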
Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/ItclCmd/code.htm> tcl_tk itclcomponent itclcomponent ============= Name ---- itcl::component — define components for extendedclass, widget or widgetadaptor Parts of this description are "borrowed" from Tcl extension [snit], as the functionality is mostly identical. Warning! -------- This is new functionality in [incr Tcl] where the API can still change!! Synopsis -------- **public component** *comp* ?**-inherit**? **protected component** *comp* ?**-inherit**? **private component** *comp* ?**-inherit**? Description ----------- The **component** command is used inside an **[incr Tcl]** extendedclass/widget/widgetadaptor definition to define components. Explicitly declares a component called comp, and automatically defines the component's instance variable. If the *-inherit* option is specified then all unknown methods and options will be delegated to this component. The name -inherit implies that instances of this new type inherit, in a sense, the methods and options of the component. That is, -inherit yes is equivalent to: ``` component mycomp delegate option * to mycomp delegate method * to mycomp ``` Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/ItclCmd/itclcomponent.htm> tcl_tk itcldelegate itcldelegate ============ [NAME](itcldelegate.htm#M2) itcl::delegation — delegate methods, procs or options to other objects [WARNING!](itcldelegate.htm#M3) [SYNOPSIS](itcldelegate.htm#M4) [DESCRIPTION](itcldelegate.htm#M5) [**delegate method** *methodName* **to** *componentName* ?**as** *targetName*?](itcldelegate.htm#M6) [**delegate method** *methodName* ?**to** *componentName*? 
**using** *pattern*](itcldelegate.htm#M7) [**%%**](itcldelegate.htm#M8) [**%c**](itcldelegate.htm#M9) [**%j**](itcldelegate.htm#M10) [**%m**](itcldelegate.htm#M11) [**%M**](itcldelegate.htm#M12) [**%n**](itcldelegate.htm#M13) [**%s**](itcldelegate.htm#M14) [**%t**](itcldelegate.htm#M15) [**%w**](itcldelegate.htm#M16) [**delegate method** *\** ?**to** *componentName*? ?**using** *pattern*? ?**except** *methodName methodName ...*?](itcldelegate.htm#M17) [**delegate option** *namespec* **to** *comp*](itcldelegate.htm#M18) [**delegate option namespec to comp as target**](itcldelegate.htm#M19) [**delegate option \* to** *comp*](itcldelegate.htm#M20) [**delegate option \* to** *comp* **except** *exceptions*](itcldelegate.htm#M21) [**delegate** *option* **\*** ?**except** *optionName optionName ...*?](itcldelegate.htm#M22) [KEYWORDS](itcldelegate.htm#M23) Name ---- itcl::delegation — delegate methods, procs or options to other objects Parts of this description are "borrowed" from Tcl extension [snit], as the functionality is mostly identical. Warning! -------- This is new functionality in [incr Tcl] where the API can still change!! Synopsis -------- **delegate method** *methodName* **to** *componentName* ?**as** *targetName*? **delegate method** *methodName* ?**to** *componentName*? **using** *pattern* **delegate method** *\** ?**to** *componentName*? ?**using** *pattern*? ?**except** *methodName methodName ...*? **delegate proc** *procName* **to** *componentName* ?**as** *targetName*? **delegate proc** *procName* ?**to** *componentName*? **using** *pattern* **delegate proc** *\** ?**to** *componentName*? ?**using** *pattern*? ?**except** *procName procName ...*? **delegate option** *optionSpec* **to** *componentName* **delegate option** *optionSpec* **to** *componentName* **as** *targetName* 
**delegate option** *\** **to** *componentName* **delegate option** *\** **to** *componentName* **except** *optionName optionName ...* Description ----------- The **delegate** command is used inside an **[incr Tcl]** extendedclass/widget/widgetadaptor definition to delegate methods/procs/options to other objects for handling. **delegate method** *methodName* **to** *componentName* ?**as** *targetName*? This form of delegate method delegates method methodName to component componentName. That is, when method methodName is called on an instance of this type, the method and its arguments will be passed to the named component's command instead. That is, the following statement ``` delegate method wag to tail ``` is roughly equivalent to this explicitly defined method: ``` method wag {args} { uplevel $tail wag $args } ``` The optional **as** clause allows you to specify the delegated method name and possibly add some arguments: ``` delegate method wagtail to tail as "wag briskly" ``` A method cannot be both locally defined and delegated. **delegate method** *methodName* ?**to** *componentName*? **using** *pattern* In this form of the delegate statement, the **using** clause is used to specify the precise form of the command to which the method methodName is delegated. The **to** clause is optional, since the chosen command might not involve any particular component. The value of the using clause is a list that may contain any or all of the following substitution codes; these codes are substituted with the described value to build the delegated command prefix. Note that the following two statements are equivalent: ``` delegate method wag to tail delegate method wag to tail using "%c %m" ``` Each element of the list becomes a single element of the delegated command; it is never reparsed as a string. Substitutions: **%%** This is replaced with a single "%". Thus, to pass the string "%c" to the command as an argument, you'd write "%%c". 
**%c** This is replaced with the named component's command. **%j** This is replaced by the method name; if the name consists of multiple tokens, they are joined by underscores ("\_"). **%m** This is replaced with the final token of the method name; if the method name has one token, this is identical to **%M**. **%M** This is replaced by the method name; if the name consists of multiple tokens, they are joined by space characters. **%n** This is replaced with the name of the instance's private namespace. **%s** This is replaced with the name of the instance command. **%t** This is replaced with the fully qualified type name. **%w** This is replaced with the original name of the instance command; for Itcl widgets and widget adaptors, it will be the Tk window name. It remains constant, even if the instance command is renamed. **delegate method** *\** ?**to** *componentName*? ?**using** *pattern*? ?**except** *methodName methodName ...*? In this form all unknown method names are delegated to the specified component. The except clause can be used to specify a list of exceptions, i.e., method names that will not be so delegated. The using clause is defined as given above. In this form, the statement must contain the to clause, the using clause, or both. In fact, the "\*" can be a list of two or more tokens whose last element is "\*", as in the following example: ``` delegate method {tail *} to tail ``` This implicitly defines the method tail whose subcommands will be delegated to the tail component. The definitions for **delegate proc** ... are the same as for method, the only difference being that this is for procs. **delegate option** *namespec* **to** *comp* **delegate option namespec to comp as target** **delegate option \* to** *comp* **delegate option \* to** *comp* **except** *exceptions* Defines a delegated option; the namespec is defined as for the option statement. 
When the configure, configurelist, or cget instance method is used to set or retrieve the option's value, the equivalent configure or cget command will be applied to the component as though the option was defined with the following **-configuremethod** and **-cgetmethod**: ``` method ConfigureMethod {option value} { $comp configure $option $value } method CgetMethod {option} { return [$comp cget $option] } ``` Note that delegated options never appear in the **itcl\_options** array. If the as clause is specified, then the target option name is used in place of name. **delegate** *option* **\*** ?**except** *optionName optionName ...*? This form delegates all unknown options to the specified component. The except clause can be used to specify a list of exceptions, i.e., option names that will not be so delegated. **Warning:** options can only be delegated to a component if it supports the **configure** and **cget** instance methods. An option cannot be both locally defined and delegated. TBD: Continue from here. Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/ItclCmd/itcldelegate.htm>
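Putting component declaration and delegation together, a definition might look like the following sketch. The `Car` and `Engine` names are illustrative only, and since the API is flagged above as still subject to change, treat this as a sketch rather than a definitive recipe:

```tcl
package require Itcl

itcl::extendedclass Engine {
    public method start {} { return "engine started" }
    public method stop  {} { return "engine stopped" }
}

itcl::extendedclass Car {
    # "component" declares the instance variable; assigning an
    # Engine object to it wires up the delegation target.
    component engine
    delegate method start to engine
    delegate method stop  to engine

    constructor {} {
        set engine [Engine #auto]
    }
}

Car mycar
puts [mycar start]    ;# forwarded to the Engine component
```

Calling `mycar start` passes the method and its arguments to the command held in the `engine` component variable, as described for **delegate method** above.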
tcl_tk scope scope ===== Name ---- itcl::scope — capture the namespace context for a variable Synopsis -------- **itcl::scope** *name* Description ----------- Creates a scoped value for the specified *name*, which must be a variable name. If the *name* is an instance variable, then the scope command returns a name which will resolve in any context as an instance variable belonging to the object. The precise format of this name is an internal detail to Itcl. Use of such a scoped value makes it possible to use instance variables in conjunction with widgets. For example, if you have an object with a private variable x, you can use x in conjunction with the -textvariable option of an entry widget. Before itcl3.0, only common variables could be used in this manner. If the *name* is not an instance variable, then it must be a common variable or a global variable. In that case, the scope command returns the fully qualified name of the variable, e.g., ::foo::bar::x. If the *name* is not recognized as a variable, the scope command returns an error. Ordinary variable names refer to variables in the global namespace. A scoped value captures a variable name together with its namespace context in a way that allows it to be referenced properly later. It is needed, for example, to wrap up variable names when a Tk widget is used within a namespace: ``` namespace foo { private variable mode 1 radiobutton .rb1 -text "Mode #1" -variable [scope mode] -value 1 pack .rb1 radiobutton .rb2 -text "Mode #2" -variable [scope mode] -value 2 pack .rb2 } ``` Radiobuttons .rb1 and .rb2 interact via the variable "mode" contained in the namespace "foo". The **scope** command guarantees this by returning the fully qualified variable name ::foo::mode. You should never use the @itcl syntax directly. 
For example, it is a bad idea to write code like this: ``` set {@itcl ::fred x} 3 puts "value = ${@itcl ::fred x}" ``` Instead, you should always use the scope command to generate the variable name dynamically. Then, you can pass that name to a widget or to any other bit of code in your program. Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/ItclCmd/scope.htm> tcl_tk itclextendedclass itclextendedclass ================= [NAME](itclextendedclass.htm#M2) itcl::extendedclass — create an extendedclass of objects [WARNING!](itclextendedclass.htm#M3) [SYNOPSIS](itclextendedclass.htm#M4) [DESCRIPTION](itclextendedclass.htm#M5) [CLASS DEFINITIONS](itclextendedclass.htm#M6) [**extendedclass** *extendedclassName definition*](itclextendedclass.htm#M7) [**inherit** *baseExtendedclass* ?*baseExtendedclass*...?](itclextendedclass.htm#M8) [**constructor** *args* ?*init*? *body*](itclextendedclass.htm#M9) [**destructor** *body*](itclextendedclass.htm#M10) [**method** *name* ?*args*? ?*body*?](itclextendedclass.htm#M11) [**proc** *name* ?*args*? ?*body*?](itclextendedclass.htm#M12) [**variable** *varName* ?*init*? ?*config*?](itclextendedclass.htm#M13) [**common** *varName* ?*init*?](itclextendedclass.htm#M14) [**public** *command* ?*arg arg ...*?](itclextendedclass.htm#M15) [**protected** *command* ?*arg arg ...*?](itclextendedclass.htm#M16) [**private** *command* ?*arg arg ...*?](itclextendedclass.htm#M17) [CLASS USAGE](itclextendedclass.htm#M18) [*extendedclassName objName* ?*args...*?](itclextendedclass.htm#M19) [OBJECT USAGE](itclextendedclass.htm#M20) [*objName method* ?*args...*?](itclextendedclass.htm#M21) [BUILT-IN METHODS](itclextendedclass.htm#M22) [*objName* **cget option**](itclextendedclass.htm#M23) [*objName* **configure** ?*option*? 
?*value option value ...*?](itclextendedclass.htm#M24) [*objName* **isa** *extendedclassName*](itclextendedclass.htm#M25) [*objName* **info** *option* ?*args...*?](itclextendedclass.htm#M26) [*objName* **info extendedclass**](itclextendedclass.htm#M27) [*objName* **info inherit**](itclextendedclass.htm#M28) [*objName* **info heritage**](itclextendedclass.htm#M29) [*objName* **info function** ?*cmdName*? ?**-protection**? ?**-type**? ?**-name**? ?**-args**? ?**-body**?](itclextendedclass.htm#M30) [*objName* **info variable** ?*varName*? ?**-protection**? ?**-type**? ?**-name**? ?**-init**? ?**-value**? ?**-config**?](itclextendedclass.htm#M31) [CHAINING METHODS/PROCS](itclextendedclass.htm#M32) [AUTO-LOADING](itclextendedclass.htm#M33) [C PROCEDURES](itclextendedclass.htm#M34) [KEYWORDS](itclextendedclass.htm#M35) Name ---- itcl::extendedclass — create an extendedclass of objects Warning! -------- This is new functionality in [incr Tcl] where the API can still change!! Synopsis -------- **itcl::extendedclass** *extendedclassName* **{** **inherit** *baseExtendedclass* ?*baseExtendedclass*...? **constructor** *args* ?*init*? *body* **destructor** *body* **public method** *name* ?*args*? ?*body*? **protected method** *name* ?*args*? ?*body*? **private method** *name* ?*args*? ?*body*? **public proc** *name* ?*args*? ?*body*? **protected proc** *name* ?*args*? ?*body*? **private proc** *name* ?*args*? ?*body*? **public variable** *varName* ?*init*? ?*config*? **protected variable** *varName* ?*init*? ?*config*? **private variable** *varName* ?*init*? ?*config*? **public common** *varName* ?*init*? **protected common** *varName* ?*init*? **private common** *varName* ?*init*? **public** *command* ?*arg arg ...*? **protected** *command* ?*arg arg ...*? **private** *command* ?*arg arg ...*? **<delegation info>** see delegation page **<option info>** see option page **set** *varName* ?*value*? **[array](../tclcmd/array.htm)** *option* ?*arg arg ...*? 
**}** *extendedclassName objName* ?*arg arg ...*? *objName method* ?*arg arg ...*? *extendedclassName::proc* ?*arg arg ...*? Description ----------- The fundamental construct in **[incr Tcl]** is the extendedclass definition. Each extendedclass acts as a template for actual objects that can be created. The extendedclass itself is a namespace which contains things common to all objects. Each object has its own unique bundle of data which contains instances of the "variables" defined in the extendedclass definition. Each object also has a built-in variable named "this", which contains the name of the object. Extendedclasses can also have "common" data members that are shared by all objects in an extendedclass. Two types of functions can be included in the extendedclass definition. "Methods" are functions which operate on a specific object, and therefore have access to both "variables" and "common" data members. "Procs" are ordinary procedures in the extendedclass namespace, and only have access to "common" data members. If the body of any method or proc starts with "**@**", it is treated as the symbolic name for a C procedure. Otherwise, it is treated as a Tcl code script. See below for details on registering and using C procedures. An extendedclass can only be defined once, although the bodies of extendedclass methods and procs can be defined again and again for interactive debugging. See the **body** and **configbody** commands for details. Each namespace can have its own collection of objects and extendedclasses. The list of extendedclasses available in the current context can be queried using the "**[itcl::find extendedclasses](find.htm)**" command, and the list of objects, with the "**[itcl::find objects](find.htm)**" command. An extendedclass can be deleted using the "**delete extendedclass**" command. Individual objects can be deleted using the "**delete object**" command. 
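A compact sketch shows how these pieces fit together: a definition with variables and methods, object creation via the extendedclass command, and method dispatch through the object name. The `Account` class is illustrative only, not taken from this page:

```tcl
package require Itcl

itcl::extendedclass Account {
    private variable balance 0

    public method deposit {amount} {
        set balance [expr {$balance + $amount}]
    }
    public method query {} {
        return $balance
    }
}

Account checking       ;# the extendedclass name creates objects
checking deposit 100   ;# the object name dispatches methods
puts [checking query]
```

Here `Account checking` invokes the command created by the definition, and `checking deposit 100` invokes a method on the resulting object, exactly as described under "Class usage" and "Object usage" below.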
Class definitions ----------------- **extendedclass** *extendedclassName definition* Provides the definition for an extendedclass named *extendedclassName*. If the extendedclass *extendedclassName* already exists, or if a command called *extendedclassName* exists in the current namespace context, this command returns an error. If the extendedclass definition is successfully parsed, *extendedclassName* becomes a command in the current context, handling the creation of objects for this extendedclass. The extendedclass *definition* is evaluated as a series of Tcl statements that define elements within the extendedclass. The following extendedclass definition commands are recognized: **inherit** *baseExtendedclass* ?*baseExtendedclass*...? Causes the current extendedclass to inherit characteristics from one or more base extendedclasses. Extendedclasses must have been defined by a previous **extendedclass** command, or must be available to the auto-loading facility (see "AUTO-LOADING" below). A single extendedclass definition can contain no more than one **inherit** command. The order of *baseExtendedclass* names in the **inherit** list affects the name resolution for extendedclass members. When the same member name appears in two or more base extendedclasses, the base extendedclass that appears first in the **inherit** list takes precedence. For example, if extendedclasses "Foo" and "Bar" both contain the member "x", and if another extendedclass has the "**inherit**" statement: ``` inherit Foo Bar ``` then the name "x" means "Foo::x". Other inherited members named "x" must be referenced with their explicit name, like "Bar::x". **constructor** *args* ?*init*? *body* Declares the *args* argument list and *body* used for the constructor, which is automatically invoked whenever an object is created. Before the *body* is executed, the optional *init* statement is used to invoke any base extendedclass constructors that require arguments. 
Variables in the *args* specification can be accessed in the *init* code fragment, and passed to base extendedclass constructors. After evaluating the *init* statement, any base extendedclass constructors that have not been executed are invoked automatically without arguments. This ensures that all base extendedclasses are fully constructed before the constructor *body* is executed. By default, this scheme causes constructors to be invoked in order from least- to most-specific. This is exactly the opposite of the order that extendedclasses are reported by the **[info heritage](../tclcmd/info.htm)** command. If construction is successful, the constructor always returns the object name, regardless of how the *body* is defined, and the object name becomes a command in the current namespace context. If construction fails, an error message is returned. **destructor** *body* Declares the *body* used for the destructor, which is automatically invoked when an object is deleted. If the destructor is successful, the object data is destroyed and the object name is removed as a command from the interpreter. If destruction fails, an error message is returned and the object remains. When an object is destroyed, all destructors in its extendedclass hierarchy are invoked in order from most- to least-specific. This is the order that the extendedclasses are reported by the "**[info heritage](../tclcmd/info.htm)**" command, and it is exactly the opposite of the default constructor order. **method** *name* ?*args*? ?*body*? Declares a method called *name*. When the method *body* is executed, it will have automatic access to object-specific variables and common data members. If the *args* list is specified, it establishes the usage information for this method. The **body** command can be used to redefine the method body, but the *args* list must match this specification. 
Within the body of another extendedclass method, a method can be invoked like any other command, simply by using its name. Outside of the extendedclass context, the method name must be prefaced by an object name, which provides the context for the data that it manipulates. Methods in a base extendedclass that are redefined in the current extendedclass, or hidden by another base extendedclass, can be qualified using the "*extendedclassName*::*method*" syntax. **proc** *name* ?*args*? ?*body*? Declares a proc called *name*. A proc is an ordinary procedure within the extendedclass namespace. Unlike a method, a proc is invoked without referring to a specific object. When the proc *body* is executed, it will have automatic access only to common data members. If the *args* list is specified, it establishes the usage information for this proc. The **body** command can be used to redefine the proc body, but the *args* list must match this specification. Within the body of another extendedclass method or proc, a proc can be invoked like any other command, simply by using its name. In any other namespace context, the proc is invoked using a qualified name like "*extendedclassName***::***proc*". Procs in a base extendedclass that are redefined in the current extendedclass, or hidden by another base extendedclass, can also be accessed via their qualified name. **variable** *varName* ?*init*? ?*config*? Defines an object-specific variable named *varName*. All object-specific variables are automatically available in extendedclass methods. They need not be declared with anything like the **[global](../tclcmd/global.htm)** command. If the optional *init* string is specified, it is used as the initial value of the variable when a new object is created. Initialization forces the variable to be a simple scalar value; uninitialized variables, on the other hand, can be set within the constructor and used as arrays. The optional *config* script is only allowed for public variables. 
If specified, this code fragment is executed whenever a public variable is modified by the built-in "configure" method. The *config* script can also be specified outside of the extendedclass definition using the **configbody** command. **common** *varName* ?*init*? Declares a common variable named *varName*. Common variables reside in the extendedclass namespace and are shared by all objects belonging to the extendedclass. They are just like global variables, except that they need not be declared with the usual **[global](../tclcmd/global.htm)** command. They are automatically visible in all extendedclass methods and procs. If the optional *init* string is specified, it is used as the initial value of the variable. Initialization forces the variable to be a simple scalar value; uninitialized variables, on the other hand, can be set with subsequent **[set](../tclcmd/set.htm)** and **[array](../tclcmd/array.htm)** commands and used as arrays. Once a common data member has been defined, it can be set using **[set](../tclcmd/set.htm)** and **[array](../tclcmd/array.htm)** commands within the extendedclass definition. This allows common data members to be initialized as arrays. For example: ``` itcl::extendedclass Foo { common boolean set boolean(true) 1 set boolean(false) 0 } ``` Note that if common data members are initialized within the constructor, they get initialized again and again whenever new objects are created. **public** *command* ?*arg arg ...*? **protected** *command* ?*arg arg ...*? **private** *command* ?*arg arg ...*? These commands are used to set the protection level for extendedclass members that are created when *command* is evaluated. The *command* is usually **method**, **[proc](../tclcmd/proc.htm)**, **[variable](../tclcmd/variable.htm)** or **common**, and the remaining *arg*'s complete the member definition. 
However, *command* can also be a script containing many different member definitions, and the protection level will apply to all of the members that are created. Class usage ----------- Once an extendedclass has been defined, the extendedclass name can be used as a command to create new objects belonging to the extendedclass. *extendedclassName objName* ?*args...*? Creates a new object in extendedclass *extendedclassName* with the name *objName*. Remaining arguments are passed to the constructor of the most-specific extendedclass. This in turn passes arguments to base extendedclass constructors before invoking its own body of commands. If construction is successful, a command called *objName* is created in the current namespace context, and *objName* is returned as the result of this operation. If an error is encountered during construction, the destructors are automatically invoked to free any resources that have been allocated, the object is deleted, and an error is returned. If *objName* contains the string "**#auto**", that string is replaced with an automatically generated name. Names have the form *extendedclassName<number>*, where the *extendedclassName* part is modified to start with a lowercase letter. In extendedclass "Toaster", for example, the "**#auto**" specification would produce names like toaster0, toaster1, etc. Note that "**#auto**" can also be buried within an object name: ``` fileselectiondialog .foo.bar.#auto -background red ``` This would generate an object named ".foo.bar.fileselectiondialog0". Object usage ------------ Once an object has been created, the object name can be used as a command to invoke methods that operate on the object. *objName method* ?*args...*? Invokes a method named *method* on an object named *objName*. Remaining arguments are passed to the argument list for the method. 
The method name can be "constructor", "destructor", any method name appearing in the extendedclass definition, or any of the following built-in methods. Built-in methods ---------------- *objName* **cget option** Provides access to public variables as configuration options. This mimics the behavior of the usual "cget" operation for Tk widgets. The *option* argument is a string of the form "**-***varName*", and this method returns the current value of the public variable *varName*. *objName* **configure** ?*option*? ?*value option value ...*? Provides access to public variables as configuration options. This mimics the behavior of the usual "configure" operation for Tk widgets. With no arguments, this method returns a list of lists describing all of the public variables. Each list has three elements: the variable name, its initial value and its current value. If a single *option* of the form "**-***varName*" is specified, then this method returns the information for that one variable. Otherwise, the arguments are treated as *option*/*value* pairs assigning new values to public variables. Each variable is assigned its new value, and if it has any "config" code associated with it, it is executed in the context of the extendedclass where it was defined. If the "config" code generates an error, the variable is set back to its previous value, and the **configure** method returns an error. *objName* **isa** *extendedclassName* Returns non-zero if the given *extendedclassName* can be found in the object's heritage, and zero otherwise. *objName* **info** *option* ?*args...*? Returns information related to a particular object named *objName*, or to its extendedclass definition. The *option* parameter includes the following things, as well as the options recognized by the usual Tcl "info" command: *objName* **info extendedclass** Returns the name of the most-specific extendedclass for object *objName*. 
*objName* **info inherit** Returns the list of base extendedclasses as they were defined in the "**inherit**" command, or an empty string if this extendedclass has no base extendedclasses. *objName* **info heritage** Returns the current extendedclass name and the entire list of base extendedclasses in the order that they are traversed for member lookup and object destruction. *objName* **info function** ?*cmdName*? ?**-protection**? ?**-type**? ?**-name**? ?**-args**? ?**-body**? With no arguments, this command returns a list of all extendedclass methods and procs. If *cmdName* is specified, it returns information for a specific method or proc. If no flags are specified, this command returns a list with the following elements: the protection level, the type (method/proc), the qualified name, the argument list and the body. Flags can be used to request specific elements from this list. *objName* **info variable** ?*varName*? ?**-protection**? ?**-type**? ?**-name**? ?**-init**? ?**-value**? ?**-config**? With no arguments, this command returns a list of all object-specific variables and common data members. If *varName* is specified, it returns information for a specific data member. If no flags are specified, this command returns a list with the following elements: the protection level, the type (variable/common), the qualified name, the initial value, and the current value. If *varName* is a public variable, the "config" code is included on this list. Flags can be used to request specific elements from this list. Chaining methods/procs ---------------------- Sometimes a base extendedclass has a method or proc that is redefined with the same name in a derived extendedclass. This is a way of making the derived extendedclass handle the same operations as the base extendedclass, but with its own specialized behavior. 
For example, suppose we have a Toaster extendedclass that looks like this: ``` itcl::extendedclass Toaster { variable crumbs 0 method toast {nslices} { if {$crumbs > 50} { error "== FIRE! FIRE! ==" } set crumbs [expr $crumbs+4*$nslices] } method clean {} { set crumbs 0 } } ``` We might create another extendedclass like SmartToaster that redefines the "toast" method. If we want to access the base extendedclass method, we can qualify it with the base extendedclass name, to avoid ambiguity: ``` itcl::extendedclass SmartToaster { inherit Toaster method toast {nslices} { if {$crumbs > 40} { clean } return [Toaster::toast $nslices] } } ``` Instead of hard-coding the base extendedclass name, we can use the "chain" command like this: ``` itcl::extendedclass SmartToaster { inherit Toaster method toast {nslices} { if {$crumbs > 40} { clean } return [chain $nslices] } } ``` The chain command searches through the extendedclass hierarchy for a slightly more generic (base extendedclass) implementation of a method or proc, and invokes it with the specified arguments. It starts at the current extendedclass context and searches through base extendedclasses in the order that they are reported by the "info heritage" command. If another implementation is not found, this command does nothing and returns the null string. Auto-loading ------------ Extendedclass definitions need not be loaded explicitly; they can be loaded as needed by the usual Tcl auto-loading facility. Each directory containing extendedclass definition files should have an accompanying "tclIndex" file. Each line in this file identifies a Tcl procedure or **[incr Tcl]** extendedclass definition and the file where the definition can be found. For example, suppose a directory contains the definitions for extendedclasses "Toaster" and "SmartToaster". 
Then the "tclIndex" file for this directory would look like: ``` # Tcl autoload index file, version 2.0 for [incr Tcl] # This file is generated by the "auto_mkindex" command # and sourced to set up indexing information for one or # more commands. Typically each line is a command that # sets an element in the auto_index array, where the # element name is the name of a command and the value is # a script that loads the command. set auto_index(::Toaster) "source $dir/Toaster.itcl" set auto_index(::SmartToaster) "source $dir/SmartToaster.itcl" ``` The **[auto\_mkindex](../tclcmd/library.htm)** command is used to automatically generate "tclIndex" files. The auto-loader must be made aware of this directory by appending the directory name to the "auto\_path" variable. When this is in place, extendedclasses will be auto-loaded as needed when used in an application. C procedures ------------ C procedures can be integrated into an **[incr Tcl]** extendedclass definition to implement methods, procs, and the "config" code for public variables. Any body that starts with "**@**" is treated as the symbolic name for a C procedure. Symbolic names are established by registering procedures via **[Itcl\_RegisterC()](https://www.tcl.tk/man/tcl/ItclLib/RegisterC.htm)**. This is usually done in the **[Tcl\_AppInit()](https://www.tcl.tk/man/tcl/TclLib/AppInit.htm)** procedure, which is automatically called when the interpreter starts up. In the following example, the procedure My\_FooCmd() is registered with the symbolic name "foo". This procedure can be referenced in the **body** command as "@foo". ``` int [Tcl\_AppInit](https://www.tcl.tk/man/tcl/TclLib/AppInit.htm)(interp) [Tcl\_Interp](https://www.tcl.tk/man/tcl/TclLib/Interp.htm) *interp; /* Interpreter for application. 
*/ { if (Itcl_Init(interp) == TCL_ERROR) { return TCL_ERROR; } if ([Itcl\_RegisterC](https://www.tcl.tk/man/tcl/ItclLib/RegisterC.htm)(interp, "foo", My_FooCmd) != TCL_OK) { return TCL_ERROR; } } ``` C procedures are implemented just like ordinary Tcl commands. See the **CrtCommand** man page for details. Within the procedure, extendedclass data members can be accessed like ordinary variables using **[Tcl\_SetVar()](https://www.tcl.tk/man/tcl/TclLib/SetVar.htm)**, **[Tcl\_GetVar()](https://www.tcl.tk/man/tcl/TclLib/SetVar.htm)**, **[Tcl\_TraceVar()](https://www.tcl.tk/man/tcl/TclLib/TraceVar.htm)**, etc. Extendedclass methods and procs can be executed like ordinary commands using **[Tcl\_Eval()](https://www.tcl.tk/man/tcl/TclLib/Eval.htm)**. **[incr Tcl]** makes this possible by automatically setting up the context before executing the C procedure. This scheme provides a natural migration path for code development. Extendedclasses can be developed quickly using Tcl code to implement the bodies. An entire application can be built and tested. When necessary, individual bodies can be implemented with C code to improve performance. Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/ItclCmd/itclextendedclass.htm>
tcl_tk class class ===== [NAME](class.htm#M2) itcl::class — create a class of objects [SYNOPSIS](class.htm#M3) [DESCRIPTION](class.htm#M4) [CLASS DEFINITIONS](class.htm#M5) [**class** *className definition*](class.htm#M6) [**inherit** *baseClass* ?*baseClass*...?](class.htm#M7) [**constructor** *args* ?*init*? *body*](class.htm#M8) [**destructor** *body*](class.htm#M9) [**method** *name* ?*args*? ?*body*?](class.htm#M10) [**proc** *name* ?*args*? ?*body*?](class.htm#M11) [**variable** *varName* ?*init*? ?*config*?](class.htm#M12) [**common** *varName* ?*init*?](class.htm#M13) [**public** *command* ?*arg arg ...*?](class.htm#M14) [**protected** *command* ?*arg arg ...*?](class.htm#M15) [**private** *command* ?*arg arg ...*?](class.htm#M16) [CLASS USAGE](class.htm#M17) [*className objName* ?*args...*?](class.htm#M18) [OBJECT USAGE](class.htm#M19) [*objName method* ?*args...*?](class.htm#M20) [BUILT-IN METHODS](class.htm#M21) [*objName* **cget option**](class.htm#M22) [*objName* **configure** ?*option*? ?*value option value ...*?](class.htm#M23) [*objName* **isa** *className*](class.htm#M24) [*objName* **info** *option* ?*args...*?](class.htm#M25) [*objName* **info class**](class.htm#M26) [*objName* **info inherit**](class.htm#M27) [*objName* **info heritage**](class.htm#M28) [*objName* **info function** ?*cmdName*? ?**-protection**? ?**-type**? ?**-name**? ?**-args**? ?**-body**?](class.htm#M29) [*objName* **info variable** ?*varName*? ?**-protection**? ?**-type**? ?**-name**? ?**-init**? ?**-value**? ?**-config**?](class.htm#M30) [CHAINING METHODS/PROCS](class.htm#M31) [AUTO-LOADING](class.htm#M32) [C PROCEDURES](class.htm#M33) [KEYWORDS](class.htm#M34) Name ---- itcl::class — create a class of objects Synopsis -------- **itcl::class** *className* **{** **inherit** *baseClass* ?*baseClass*...? **constructor** *args* ?*init*? *body* **destructor** *body* **method** *name* ?*args*? ?*body*? **proc** *name* ?*args*? ?*body*? **variable** *varName* ?*init*? ?*config*? 
**common** *varName* ?*init*? **public** *command* ?*arg arg ...*? **protected** *command* ?*arg arg ...*? **private** *command* ?*arg arg ...*? **set** *varName* ?*value*? **[array](../tclcmd/array.htm)** *option* ?*arg arg ...*? **}** *className objName* ?*arg arg ...*? *objName method* ?*arg arg ...*? *className::proc* ?*arg arg ...*? Description ----------- The fundamental construct in **[incr Tcl]** is the class definition. Each class acts as a template for actual objects that can be created. The class itself is a namespace which contains things common to all objects. Each object has its own unique bundle of data which contains instances of the "variables" defined in the class definition. Each object also has a built-in variable named "this", which contains the name of the object. Classes can also have "common" data members that are shared by all objects in a class. Two types of functions can be included in the class definition. "Methods" are functions which operate on a specific object, and therefore have access to both "variables" and "common" data members. "Procs" are ordinary procedures in the class namespace, and only have access to "common" data members. If the body of any method or proc starts with "**@**", it is treated as the symbolic name for a C procedure. Otherwise, it is treated as a Tcl code script. See below for details on registering and using C procedures. A class can only be defined once, although the bodies of class methods and procs can be defined again and again for interactive debugging. See the **body** and **configbody** commands for details. Each namespace can have its own collection of objects and classes. The list of classes available in the current context can be queried using the "**[itcl::find classes](find.htm)**" command, and the list of objects, with the "**[itcl::find objects](find.htm)**" command. A class can be deleted using the "**delete class**" command. 
Individual objects can be deleted using the "**delete object**" command. Class definitions ----------------- **class** *className definition* Provides the definition for a class named *className*. If the class *className* already exists, or if a command called *className* exists in the current namespace context, this command returns an error. If the class definition is successfully parsed, *className* becomes a command in the current context, handling the creation of objects for this class. The class *definition* is evaluated as a series of Tcl statements that define elements within the class. The following class definition commands are recognized: **inherit** *baseClass* ?*baseClass*...? Causes the current class to inherit characteristics from one or more base classes. Classes must have been defined by a previous **class** command, or must be available to the auto-loading facility (see "AUTO-LOADING" below). A single class definition can contain no more than one **inherit** command. The order of *baseClass* names in the **inherit** list affects the name resolution for class members. When the same member name appears in two or more base classes, the base class that appears first in the **inherit** list takes precedence. For example, if classes "Foo" and "Bar" both contain the member "x", and if another class has the "**inherit**" statement: ``` inherit Foo Bar ``` then the name "x" means "Foo::x". Other inherited members named "x" must be referenced with their explicit name, like "Bar::x". **constructor** *args* ?*init*? *body* Declares the *args* argument list and *body* used for the constructor, which is automatically invoked whenever an object is created. Before the *body* is executed, the optional *init* statement is used to invoke any base class constructors that require arguments. Variables in the *args* specification can be accessed in the *init* code fragment, and passed to base class constructors. 
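As a minimal sketch of the *init* code fragment (the class names `Account` and `Savings` are illustrative, and the Itcl package is assumed to be loadable), a derived class constructor can hand one of its own arguments to a base class constructor before its body runs:

```tcl
package require Itcl

itcl::class Account {
    variable balance
    constructor {amount} {
        set balance $amount
    }
}

itcl::class Savings {
    inherit Account
    variable rate
    constructor {amount pct} {
        Account::constructor $amount   ;# init fragment: invoke base constructor
    } {
        set rate $pct                  ;# body runs after all bases are built
    }
}

Savings s 100 0.05
```

Here "s" becomes a command in the current namespace, and the `Account` constructor receives `$amount` before the `Savings` body executes.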
After evaluating the *init* statement, any base class constructors that have not been executed are invoked automatically without arguments. This ensures that all base classes are fully constructed before the constructor *body* is executed. By default, this scheme causes constructors to be invoked in order from least- to most-specific. This is exactly the opposite of the order that classes are reported by the **[info heritage](../tclcmd/info.htm)** command. If construction is successful, the constructor always returns the object name, regardless of how the *body* is defined, and the object name becomes a command in the current namespace context. If construction fails, an error message is returned. **destructor** *body* Declares the *body* used for the destructor, which is automatically invoked when an object is deleted. If the destructor is successful, the object data is destroyed and the object name is removed as a command from the interpreter. If destruction fails, an error message is returned and the object remains. When an object is destroyed, all destructors in its class hierarchy are invoked in order from most- to least-specific. This is the order that the classes are reported by the "**[info heritage](../tclcmd/info.htm)**" command, and it is exactly the opposite of the default constructor order. **method** *name* ?*args*? ?*body*? Declares a method called *name*. When the method *body* is executed, it will have automatic access to object-specific variables and common data members. If the *args* list is specified, it establishes the usage information for this method. The **body** command can be used to redefine the method body, but the *args* list must match this specification. Within the body of another class method, a method can be invoked like any other command, simply by using its name. Outside of the class context, the method name must be prefaced by an object name, which provides the context for the data that it manipulates. 
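Both invocation styles can be sketched as follows (the class and method names are illustrative, and the Itcl package is assumed to be loadable):

```tcl
package require Itcl

itcl::class Counter {
    variable num 0
    method bump {} {
        incr num
    }
    method bumpTwice {} {
        bump    ;# inside the class: call a sibling method by name
        bump
    }
}

Counter c1
c1 bumpTwice    ;# outside the class: the object name supplies the context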
Methods in a base class that are redefined in the current class, or hidden by another base class, can be qualified using the "*className*::*method*" syntax. **proc** *name* ?*args*? ?*body*? Declares a proc called *name*. A proc is an ordinary procedure within the class namespace. Unlike a method, a proc is invoked without referring to a specific object. When the proc *body* is executed, it will have automatic access only to common data members. If the *args* list is specified, it establishes the usage information for this proc. The **body** command can be used to redefine the proc body, but the *args* list must match this specification. Within the body of another class method or proc, a proc can be invoked like any other command, simply by using its name. In any other namespace context, the proc is invoked using a qualified name like "*className***::***proc*". Procs in a base class that are redefined in the current class, or hidden by another base class, can also be accessed via their qualified name. **variable** *varName* ?*init*? ?*config*? Defines an object-specific variable named *varName*. All object-specific variables are automatically available in class methods. They need not be declared with anything like the **[global](../tclcmd/global.htm)** command. If the optional *init* string is specified, it is used as the initial value of the variable when a new object is created. Initialization forces the variable to be a simple scalar value; uninitialized variables, on the other hand, can be set within the constructor and used as arrays. The optional *config* script is only allowed for public variables. If specified, this code fragment is executed whenever a public variable is modified by the built-in "configure" method. The *config* script can also be specified outside of the class definition using the **configbody** command. **common** *varName* ?*init*? Declares a common variable named *varName*. 
Common variables reside in the class namespace and are shared by all objects belonging to the class. They are just like global variables, except that they need not be declared with the usual **[global](../tclcmd/global.htm)** command. They are automatically visible in all class methods and procs. If the optional *init* string is specified, it is used as the initial value of the variable. Initialization forces the variable to be a simple scalar value; uninitialized variables, on the other hand, can be set with subsequent **[set](../tclcmd/set.htm)** and **[array](../tclcmd/array.htm)** commands and used as arrays. Once a common data member has been defined, it can be set using **[set](../tclcmd/set.htm)** and **[array](../tclcmd/array.htm)** commands within the class definition. This allows common data members to be initialized as arrays. For example: ``` itcl::class Foo { common boolean set boolean(true) 1 set boolean(false) 0 } ``` Note that if common data members are initialized within the constructor, they get initialized again and again whenever new objects are created. **public** *command* ?*arg arg ...*? **protected** *command* ?*arg arg ...*? **private** *command* ?*arg arg ...*? These commands are used to set the protection level for class members that are created when *command* is evaluated. The *command* is usually **method**, **[proc](../tclcmd/proc.htm)**, **[variable](../tclcmd/variable.htm)** or **common**, and the remaining *arg*'s complete the member definition. However, *command* can also be a script containing many different member definitions, and the protection level will apply to all of the members that are created. Class usage ----------- Once a class has been defined, the class name can be used as a command to create new objects belonging to the class. *className objName* ?*args...*? Creates a new object in class *className* with the name *objName*. Remaining arguments are passed to the constructor of the most-specific class. 
This in turn passes arguments to base class constructors before invoking its own body of commands. If construction is successful, a command called *objName* is created in the current namespace context, and *objName* is returned as the result of this operation. If an error is encountered during construction, the destructors are automatically invoked to free any resources that have been allocated, the object is deleted, and an error is returned. If *objName* contains the string "**#auto**", that string is replaced with an automatically generated name. Names have the form *className<number>*, where the *className* part is modified to start with a lowercase letter. In class "Toaster", for example, the "**#auto**" specification would produce names like toaster0, toaster1, etc. Note that "**#auto**" can also be buried within an object name: ``` fileselectiondialog .foo.bar.#auto -background red ``` This would generate an object named ".foo.bar.fileselectiondialog0". Object usage ------------ Once an object has been created, the object name can be used as a command to invoke methods that operate on the object. *objName method* ?*args...*? Invokes a method named *method* on an object named *objName*. Remaining arguments are passed to the argument list for the method. The method name can be "constructor", "destructor", any method name appearing in the class definition, or any of the following built-in methods. Built-in methods ---------------- *objName* **cget option** Provides access to public variables as configuration options. This mimics the behavior of the usual "cget" operation for Tk widgets. The *option* argument is a string of the form "**-***varName*", and this method returns the current value of the public variable *varName*. *objName* **configure** ?*option*? ?*value option value ...*? Provides access to public variables as configuration options. This mimics the behavior of the usual "configure" operation for Tk widgets. 
With no arguments, this method returns a list of lists describing all of the public variables. Each list has three elements: the variable name, its initial value and its current value. If a single *option* of the form "**-***varName*" is specified, then this method returns the information for that one variable. Otherwise, the arguments are treated as *option*/*value* pairs assigning new values to public variables. Each variable is assigned its new value, and if it has any "config" code associated with it, it is executed in the context of the class where it was defined. If the "config" code generates an error, the variable is set back to its previous value, and the **configure** method returns an error. *objName* **isa** *className* Returns non-zero if the given *className* can be found in the object's heritage, and zero otherwise. *objName* **info** *option* ?*args...*? Returns information related to a particular object named *objName*, or to its class definition. The *option* parameter includes the following things, as well as the options recognized by the usual Tcl "info" command: *objName* **info class** Returns the name of the most-specific class for object *objName*. *objName* **info inherit** Returns the list of base classes as they were defined in the "**inherit**" command, or an empty string if this class has no base classes. *objName* **info heritage** Returns the current class name and the entire list of base classes in the order that they are traversed for member lookup and object destruction. *objName* **info function** ?*cmdName*? ?**-protection**? ?**-type**? ?**-name**? ?**-args**? ?**-body**? With no arguments, this command returns a list of all class methods and procs. If *cmdName* is specified, it returns information for a specific method or proc. If no flags are specified, this command returns a list with the following elements: the protection level, the type (method/proc), the qualified name, the argument list and the body. 
Flags can be used to request specific elements from this list. *objName* **info variable** ?*varName*? ?**-protection**? ?**-type**? ?**-name**? ?**-init**? ?**-value**? ?**-config**? With no arguments, this command returns a list of all object-specific variables and common data members. If *varName* is specified, it returns information for a specific data member. If no flags are specified, this command returns a list with the following elements: the protection level, the type (variable/common), the qualified name, the initial value, and the current value. If *varName* is a public variable, the "config" code is included on this list. Flags can be used to request specific elements from this list. Chaining methods/procs ---------------------- Sometimes a base class has a method or proc that is redefined with the same name in a derived class. This is a way of making the derived class handle the same operations as the base class, but with its own specialized behavior. For example, suppose we have a Toaster class that looks like this: ``` itcl::class Toaster { variable crumbs 0 method toast {nslices} { if {$crumbs > 50} { error "== FIRE! FIRE! ==" } set crumbs [expr $crumbs+4*$nslices] } method clean {} { set crumbs 0 } } ``` We might create another class like SmartToaster that redefines the "toast" method. If we want to access the base class method, we can qualify it with the base class name, to avoid ambiguity: ``` itcl::class SmartToaster { inherit Toaster method toast {nslices} { if {$crumbs > 40} { clean } return [Toaster::toast $nslices] } } ``` Instead of hard-coding the base class name, we can use the "chain" command like this: ``` itcl::class SmartToaster { inherit Toaster method toast {nslices} { if {$crumbs > 40} { clean } return [chain $nslices] } } ``` The chain command searches through the class hierarchy for a slightly more generic (base class) implementation of a method or proc, and invokes it with the specified arguments. 
It starts at the current class context and searches through base classes in the order that they are reported by the "info heritage" command. If another implementation is not found, this command does nothing and returns the null string. Auto-loading ------------ Class definitions need not be loaded explicitly; they can be loaded as needed by the usual Tcl auto-loading facility. Each directory containing class definition files should have an accompanying "tclIndex" file. Each line in this file identifies a Tcl procedure or **[incr Tcl]** class definition and the file where the definition can be found. For example, suppose a directory contains the definitions for classes "Toaster" and "SmartToaster". Then the "tclIndex" file for this directory would look like: ``` # Tcl autoload index file, version 2.0 for [incr Tcl] # This file is generated by the "auto_mkindex" command # and sourced to set up indexing information for one or # more commands. Typically each line is a command that # sets an element in the auto_index array, where the # element name is the name of a command and the value is # a script that loads the command. set auto_index(::Toaster) "source $dir/Toaster.itcl" set auto_index(::SmartToaster) "source $dir/SmartToaster.itcl" ``` The **[auto\_mkindex](../tclcmd/library.htm)** command is used to automatically generate "tclIndex" files. The auto-loader must be made aware of this directory by appending the directory name to the "auto\_path" variable. When this is in place, classes will be auto-loaded as needed when used in an application. C procedures ------------ C procedures can be integrated into an **[incr Tcl]** class definition to implement methods, procs, and the "config" code for public variables. Any body that starts with "**@**" is treated as the symbolic name for a C procedure. Symbolic names are established by registering procedures via **[Itcl\_RegisterC()](https://www.tcl.tk/man/tcl/ItclLib/RegisterC.htm)**. 
This is usually done in the **[Tcl\_AppInit()](https://www.tcl.tk/man/tcl/TclLib/AppInit.htm)** procedure, which is automatically called when the interpreter starts up. In the following example, the procedure My\_FooCmd() is registered with the symbolic name "foo". This procedure can be referenced in the **body** command as "@foo". ``` int [Tcl\_AppInit](https://www.tcl.tk/man/tcl/TclLib/AppInit.htm)(interp) [Tcl\_Interp](https://www.tcl.tk/man/tcl/TclLib/Interp.htm) *interp; /* Interpreter for application. */ { if (Itcl_Init(interp) == TCL_ERROR) { return TCL_ERROR; } if ([Itcl\_RegisterC](https://www.tcl.tk/man/tcl/ItclLib/RegisterC.htm)(interp, "foo", My_FooCmd) != TCL_OK) { return TCL_ERROR; } } ``` C procedures are implemented just like ordinary Tcl commands. See the **CrtCommand** man page for details. Within the procedure, class data members can be accessed like ordinary variables using **[Tcl\_SetVar()](https://www.tcl.tk/man/tcl/TclLib/SetVar.htm)**, **[Tcl\_GetVar()](https://www.tcl.tk/man/tcl/TclLib/SetVar.htm)**, **[Tcl\_TraceVar()](https://www.tcl.tk/man/tcl/TclLib/TraceVar.htm)**, etc. Class methods and procs can be executed like ordinary commands using **[Tcl\_Eval()](https://www.tcl.tk/man/tcl/TclLib/Eval.htm)**. **[incr Tcl]** makes this possible by automatically setting up the context before executing the C procedure. This scheme provides a natural migration path for code development. Classes can be developed quickly using Tcl code to implement the bodies. An entire application can be built and tested. When necessary, individual bodies can be implemented with C code to improve performance. Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/ItclCmd/class.htm>
tcl_tk itcl itcl ==== Name ---- itcl — object-oriented extensions to Tcl Description ----------- **[incr Tcl]** provides object-oriented extensions to Tcl, much as C++ provides object-oriented extensions to C. The emphasis of this work, however, is not to create a whiz-bang object-oriented programming environment. Rather, it is to support more structured programming practices in Tcl without changing the flavor of the language. More than anything else, **[incr Tcl]** provides a means of encapsulating related procedures together with their shared data in a namespace that is hidden from the outside world. It encourages better programming by promoting the object-oriented "library" mindset. It also allows for code re-use through inheritance. Classes ------- The fundamental construct in **[incr Tcl]** is the class definition. Each class acts as a template for actual objects that can be created. Each object has its own unique bundle of data, which contains instances of the "variables" defined in the class. Special procedures called "methods" are used to manipulate individual objects. Methods are just like the operations that are used to manipulate Tk widgets. The "**[button](../tkcmd/button.htm)**" widget, for example, has methods such as "flash" and "invoke" that cause a particular button to blink and invoke its command. Within the body of a method, the "variables" defined in the class are automatically available. They need not be declared with anything like the **[global](../tclcmd/global.htm)** command. Within another class method, a method can be invoked like any other command, simply by using its name. From any other context, the method name must be prefaced by an object name, which provides a context for the data that the method can access. Each class has its own namespace containing things that are common to all objects which belong to the class. For example, "common" data members are shared by all objects in the class. 
They are global variables that exist in the class namespace, but since they are included in the class definition, they need not be declared using the **[global](../tclcmd/global.htm)** command; they are automatically available to any code executing in the class context. A class can also create ordinary global variables, but these must be declared using the **[global](../tclcmd/global.htm)** command each time they are used. Classes can also have ordinary procedures declared as "procs". Within another class method or proc, a proc can be invoked like any other command, simply by using its name. From any other context, the procedure name should be qualified with the class namespace like "*className***::***proc*". Class procs execute in the class context, and therefore have automatic access to all "common" data members. However, they cannot access object-specific "variables", since they are invoked without reference to any specific object. They are usually used to perform generic operations which affect all objects belonging to the class. Each of the elements in a class can be declared "public", "protected" or "private". Public elements can be accessed by the class, by derived classes (other classes that inherit this class), and by external clients that use the class. Protected elements can be accessed by the class, and by derived classes. Private elements are only accessible in the class where they are defined. The "public" elements within a class define its interface to the external world. Public methods define the operations that can be used to manipulate an object. Public variables are recognized as configuration options by the "configure" and "cget" methods that are built into each class. The public interface says *what* an object will do but not *how* it will do it. Protected and private members, along with the bodies of class methods and procs, provide the implementation details. 
Insulating the application developer from these details leaves the class designer free to change them at any time, without warning, and without affecting programs that rely on the class. It is precisely this encapsulation that makes object-oriented programs easier to understand and maintain. The fact that **[incr Tcl]** objects look like Tk widgets is no accident. **[incr Tcl]** was designed this way, to blend naturally into a Tcl/Tk application. But **[incr Tcl]** extends the Tk paradigm from being merely object-based to being fully object-oriented. An object-oriented system supports inheritance, allowing classes to share common behaviors by inheriting them from an ancestor or base class. Having a base class as a common abstraction allows a programmer to treat related classes in a similar manner. For example, a toaster and a blender perform different (specialized) functions, but both share the abstraction of being appliances. By abstracting common behaviors into a base class, code can be *shared* rather than *copied*. The resulting application is easier to understand and maintain, and derived classes (e.g., specialized appliances) can be added or removed more easily. This description was merely a brief overview of object-oriented programming and **[incr Tcl]**. A more tutorial introduction is presented in the paper included with this distribution. See the **class** command for more details on creating and using classes. Namespaces ---------- **[incr Tcl]** now includes a complete namespace facility. A namespace is a collection of commands and global variables that is kept apart from the usual global scope. This allows Tcl code libraries to be packaged in a well-defined manner, and prevents unwanted interactions with other libraries. A namespace can also have child namespaces within it, so one library can contain its own private copy of many other libraries. A namespace can also be used to wrap up a group of related classes. 
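Wrapping a group of related classes in a namespace can be sketched as follows (the `::appliances` namespace and the class bodies are illustrative, echoing the toaster/blender analogy above; the Itcl package is assumed to be loadable):

```tcl
package require Itcl

# A library namespace wrapping a group of related classes.
namespace eval ::appliances {
    itcl::class Toaster {
        method toast {n} {
            return "toasting $n slices"
        }
    }
    itcl::class Blender {
        method blend {} {
            return "blending"
        }
    }
}

# From outside the namespace, classes are reached by qualified name.
::appliances::Toaster t
t toast 2
```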
The global scope (named "::") is the root namespace for an interpreter; all other namespaces are contained within it. See the **[namespace](../tclcmd/namespace.htm)** command for details on creating and using namespaces. Mega-widgets ------------ Mega-widgets are high-level widgets that are constructed using Tk widgets as component parts, usually without any C code. A fileselectionbox, for example, may have a few listboxes, some entry widgets and some control buttons. These individual widgets are put together in a way that makes them act like one big widget. **[incr Tk]** is a framework for building mega-widgets. It uses **[incr Tcl]** to support the object paradigm, and adds base classes which provide default widget behaviors. See the **itk** man page for more details. **[incr Widgets]** is a library of mega-widgets built using **[incr Tk]**. It contains more than 30 different widget classes that can be used right out of the box to build Tcl/Tk applications. Each widget class has its own man page describing the features available. Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/ItclCmd/itcl.htm> tcl_tk is is == Name ---- itcl::is — test argument to see if it is a class or an object Synopsis -------- **[itcl::is](is.htm)** *option* ?*arg arg ...*? Description ----------- The **is** command is used to check if the argument given is a class or an object, depending on the option given. If the argument is a class or object, then 1 is returned. Otherwise, 0 is returned. The **is** command also recognizes the commands wrapped in the itcl **code** command. The *option* argument determines what action is carried out by the command. The legal *options* (which may be abbreviated) are: **is class** *command* Returns 1 if *command* is a class, and returns 0 otherwise. The fully qualified name of the class needs to be given as the *command* argument. 
So, if a class resides in a namespace, then the namespace needs to be specified as well. For example, if a class **C** resides in a namespace **N**, then the command should be called like: ``` **is N::C** or **is ::N::C** ``` **is** object ?**-class** *className*? *command* Returns 1 if *command* is an object, and returns 0 otherwise. If the optional "**-class**" parameter is specified, then the *command* will be checked within the context of the class given. Note that *className* has to exist. If not, then an error will be given. So, if *className* is uncertain to be a class, then the programmer will need to check its existence beforehand, or wrap it in a catch statement. For example, if **c** is an object in the class **C** in namespace **N**, then these are the possibilities (all return 1): ``` set obj [N::C c] itcl::is object N::c itcl::is object c itcl::is object $obj itcl::is object [itcl::code c] ``` Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/ItclCmd/is.htm> tcl_tk ensemble ensemble ======== Name ---- itcl::ensemble — create or modify a composite command Synopsis -------- **itcl::ensemble** *ensName* ?*command arg arg...*? or **ensemble** *ensName* { **part** *partName args body* *...* **ensemble** *partName* { **part** *subPartName args body* **part** *subPartName args body* *...* } } Description ----------- The **ensemble** command is used to create or modify a composite command. See the section **[WHAT IS AN ENSEMBLE?](#M5)** below for a brief overview of ensembles. If the **ensemble** command finds an existing ensemble called *ensName*, it updates that ensemble. Otherwise, it creates an ensemble called *ensName*. If the *ensName* is a simple name like "foo", then an ensemble command named "foo" is added to the current namespace context. If a command named "foo" already exists in that context, then it is deleted. 
If the *ensName* contains namespace qualifiers like "a::b::foo", then the namespace path is resolved, and the ensemble command is added to that namespace context. Parent namespaces like "a" and "b" are created automatically, as needed. If the *ensName* contains spaces like "a::b::foo bar baz", then additional words like "bar" and "baz" are treated as sub-ensembles. Sub-ensembles are merely parts within an ensemble; they do not have a Tcl command associated with them. An ensemble like "foo" can have a sub-ensemble called "foo bar", which in turn can have a sub-ensemble called "foo bar baz". In this case, the sub-ensemble "foo bar" must be created before the sub-ensemble "foo bar baz" that resides within it. If there are any arguments following *ensName*, then they are treated as commands, and they are executed to update the ensemble. The following commands are recognized in this context: **part** and **ensemble**. The **part** command defines a new part for the ensemble. Its syntax is identical to the usual **[proc](../tclcmd/proc.htm)** command, but it defines a part within an ensemble, instead of a Tcl command. If a part called *partName* already exists within the ensemble, then the **part** command returns an error. The **ensemble** command can be nested inside another **ensemble** command to define a sub-ensemble. What is an ensemble? -------------------- The usual "info" command is a composite command--the command name **[info](../tclcmd/info.htm)** must be followed by a sub-command like **body** or **globals**. We will refer to a command like **[info](../tclcmd/info.htm)** as an *ensemble*, and to sub-commands like **body** or **globals** as its *parts*. Ensembles can be nested. For example, the **[info](../tclcmd/info.htm)** command has an ensemble **[info namespace](../tclcmd/info.htm)** within it. This ensemble has parts like **info namespace all** and **info namespace children**. 
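The nesting of **part** and **ensemble** described above can be sketched as follows (the `convert` ensemble and its part names are purely illustrative; the Itcl package is assumed to be loadable):

```tcl
package require Itcl

# Sketch: an ensemble with a nested sub-ensemble.
itcl::ensemble convert {
    part upper {s} {
        string toupper $s
    }
    ensemble trim {
        part left {s} {
            string trimleft $s
        }
        part right {s} {
            string trimright $s
        }
    }
}

convert upper "abc"        ;# invokes the "upper" part
convert trim left "  hi"   ;# invokes the nested "trim left" part
```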
With ensembles, composite commands can be created and extended in an automatic way. Any package can find an existing ensemble and add new parts to it. So extension writers can add their own parts, for example, to the **[info](../tclcmd/info.htm)** command. The ensemble facility manages all of the part names and keeps track of unique abbreviations. Normally, you can abbreviate **[info complete](../tclcmd/info.htm)** to **[info comp](../tclcmd/info.htm)**. But if an extension adds the part **[info complexity](../tclcmd/info.htm)**, the minimum abbreviation for **[info complete](../tclcmd/info.htm)** becomes **[info complet](../tclcmd/info.htm)**. The ensemble facility not only automates the construction of composite commands, but it automates the error handling as well. If you invoke an ensemble command without specifying a part name, you get an automatically generated error message that summarizes the usage information. For example, when the **[info](../tclcmd/info.htm)** command is invoked without any arguments, it produces the following error message: ``` wrong # args: should be one of... info args procname info body procname info cmdcount info commands ?pattern? info complete command info context info default procname arg varname info exists varName info globals ?pattern? info level ?number? info library info locals ?pattern? info namespace option ?arg arg ...? info patchlevel info procs ?pattern? info protection ?-command? ?-variable? name info script info tclversion info vars ?pattern? info which ?-command? ?-variable? ?-namespace? name ``` You can also customize the way an ensemble responds to errors. When an ensemble encounters an unspecified or ambiguous part name, it looks for a part called **@error**. If it exists, then it is used to handle the error. This part will receive all of the arguments on the command line starting with the offending part name. It can find another way of resolving the command, or generate its own error message. 
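As a hedged sketch, an **@error** part for a hypothetical **color** ensemble might look like this (the ensemble name, parts, and error message are all illustrative):

```
package require Itcl

itcl::ensemble color {
    part red {} { return #ff0000 }
    ;# fallback part: receives the offending part name and any
    ;# remaining arguments, and raises a custom error
    part @error {args} {
        error "unknown color \"[lindex $args 0]\": must be red"
    }
}
```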
Example ------- We could use an ensemble to clean up the syntax of the various "wait" commands in Tcl/Tk. Instead of using a series of strange commands like this: ``` vwait x tkwait visibility .top tkwait window . ``` we could use commands with a uniform syntax, like this: ``` wait variable x wait visibility .top wait window . ``` The Tcl package could define the following ensemble: ``` itcl::ensemble wait part variable {name} { uplevel vwait $name } ``` The Tk package could add some options to this ensemble, with a command like this: ``` itcl::ensemble wait { part visibility {name} { tkwait visibility $name } part window {name} { tkwait window $name } } ``` Other extensions could add their own parts to the **wait** command too. Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/ItclCmd/ensemble.htm> tcl_tk find find ==== Name ---- itcl::find — search for classes and objects Synopsis -------- **[itcl::find](find.htm)** *option* ?*arg arg ...*? Description ----------- The **find** command is used to find classes and objects that are available in the current interpreter. Classes and objects are reported first in the active namespace, then in all other namespaces in the interpreter. The *option* argument determines what action is carried out by the command. The legal *options* (which may be abbreviated) are: **find classes ?***pattern*? Returns a list of [incr Tcl] classes. Classes in the current namespace are listed first, followed by classes in all other namespaces in the interpreter. If the optional *pattern* is specified, then the reported names are compared using the rules of the "**[string match](../tclcmd/string.htm)**" command, and only matching names are reported. If a class resides in the current namespace context, this command reports its simple name--without any qualifiers. 
However, if the *pattern* contains **::** qualifiers, or if the class resides in another context, this command reports its fully-qualified name. Therefore, you can use the following command to obtain a list where all names are fully-qualified: ``` itcl::find classes ::* ``` **find objects ?***pattern*? ?**-class** *className*? ?**-isa** *className*? Returns a list of [incr Tcl] objects. Objects in the current namespace are listed first, followed by objects in all other namespaces in the interpreter. If the optional *pattern* is specified, then the reported names are compared using the rules of the "**[string match](../tclcmd/string.htm)**" command, and only matching names are reported. If the optional "**-class**" parameter is specified, this list is restricted to objects whose most-specific class is *className*. If the optional "**-isa**" parameter is specified, this list is further restricted to objects having the given *className* anywhere in their heritage. If an object resides in the current namespace context, this command reports its simple name--without any qualifiers. However, if the *pattern* contains **::** qualifiers, or if the object resides in another context, this command reports its fully-qualified name. 
Therefore, you can use the following command to obtain a list where all names are fully-qualified: ``` itcl::find objects ::* ``` Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/ItclCmd/find.htm> tcl_tk tdbc_odbc tdbc\_odbc ========== [NAME](tdbc_odbc.htm#M2) tdbc::odbc — TDBC-ODBC bridge [SYNOPSIS](tdbc_odbc.htm#M3) [DESCRIPTION](tdbc_odbc.htm#M4) [**add**](tdbc_odbc.htm#M5) [**add\_system**](tdbc_odbc.htm#M6) [**configure**](tdbc_odbc.htm#M7) [**configure\_system**](tdbc_odbc.htm#M8) [**remove**](tdbc_odbc.htm#M9) [**remove\_system**](tdbc_odbc.htm#M10) [CONNECTION OPTIONS](tdbc_odbc.htm#M11) [EXAMPLES](tdbc_odbc.htm#M12) [SEE ALSO](tdbc_odbc.htm#M13) [KEYWORDS](tdbc_odbc.htm#M14) [COPYRIGHT](tdbc_odbc.htm#M15) Name ---- tdbc::odbc — TDBC-ODBC bridge Synopsis -------- package require **tdbc::odbc 1.0** **tdbc::odbc::connection create** *db* *connectionString* ?*-option value...*? **tdbc::odbc::connection new** *connectionString* ?*-option value...*? **tdbc::odbc::datasources** ?**-system**|**-user**? **tdbc::odbc::drivers** **tdbc::odbc::datasource** *command* *driverName* ?*keyword*-*value*?... Description ----------- The **tdbc::odbc** driver provides a database interface that conforms to Tcl DataBase Connectivity (TDBC) and allows a Tcl script to connect to any SQL database presenting an ODBC interface. It is also provided as a worked example of how to write a database driver in C, so that driver authors have a starting point for further development. Connection to an ODBC database is established by invoking **tdbc::odbc::connection create**, passing it the name to be used as a connection handle, followed by a standard ODBC connection string. As an alternative, **tdbc::odbc::connection new** may be used to create a database connection with an automatically assigned name. The return value from **tdbc::odbc::connection new** is the name that was chosen for the connection handle. 
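Both creation forms described above might be sketched as follows (the DSN name is illustrative):

```
package require tdbc::odbc

# Caller-chosen handle: "db" becomes the connection command.
tdbc::odbc::connection create db "DSN=PAYROLL"

# Automatically assigned handle: the chosen name is returned.
set handle [tdbc::odbc::connection new "DSN=PAYROLL"]
```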
The connection string will include at least a **DRIVER** or **DSN** keyword, and may include others that are defined by a particular ODBC driver. (If the local ODBC system supports a graphical user interface, the **-parent** option (see below) may allow calling **tdbc::odbc::connection create** with an empty connection string.) The side effect of **tdbc::odbc::connection create** is to create a new database connection. See **tdbc::connection(n)** for the details of how to use the connection to manipulate a database. In addition to the standard TDBC interface, **tdbc::odbc** supports three additional commands. The first of these, **tdbc::odbc::datasources**, returns a Tcl list enumerating the named data sources available to the program (for connection with the **DSN** keyword in the connection string). The result of **tdbc::odbc::datasources** may be constrained to only system data sources or only user data sources by including the **-system** or **-user** options, respectively. The **tdbc::odbc::drivers** command returns a dictionary whose keys are the names of drivers available for the **DRIVER** keyword in the connection string, and whose values are descriptions of the drivers. The **tdbc::odbc::datasource** command allows configuration of named data sources on those systems that support the ODBC Installer application programming interface. It accepts a *command*, which specifies the operation to be performed, the name of a *driver* for the database in question, and a set of keyword-value pairs that are interpreted by the given driver. The *command* must be one of the following: **add** Adds a user data source. The keyword-value pairs must include at least a **DSN** option naming the data source. **add\_system** Adds a system data source. The keyword-value pairs must include at least a **DSN** option naming the data source. **configure** Configures a user data source. The keyword-value pairs will usually include a **DSN** option naming the data source.
Some drivers will support other options, such as the **CREATE\_DB** option to the Microsoft Access driver on Windows. **configure\_system** Configures a system data source. **remove** Removes a user data source. The keyword-value pairs must include a **DSN** option specifying the data source to remove. **remove\_system** Removes a system data source. The keyword-value pairs must include a **DSN** option specifying the data source to remove. Connection options ------------------ The **tdbc::odbc::connection create** object command supports the **-encoding**, **-isolation**, **-readonly** and **-timeout** options common to all TDBC drivers. The **-encoding** option will succeed only if the requested encoding is the same as the system encoding; **tdbc::odbc** does not attempt to specify alternative encodings to an ODBC driver. (Some drivers accept encoding specifications in the connection string.) In addition, if Tk is present in the requesting interpreter, and the local system's ODBC driver manager supports a graphical user interface, the **tdbc::odbc::connection create** object command supports a **-parent** option, whose value is the path name of a Tk window. If this option is specified, and a connection string does not specify all the information needed to connect to a data source, the ODBC driver manager will display a dialog box to request whatever additional information is required. The requesting interpreter will block until the user dismisses the dialog, at which point the connection is made. Examples -------- Since ODBC connection strings are driver specific, it is often difficult to find the documentation needed to compose them. The following examples are known to work on most Windows systems and illustrate at least a few useful things that a program can do. ``` tdbc::odbc::connection create db \ "DSN={PAYROLL};UID={aladdin};PWD={Sesame}" ``` Connects to a named data source "PAYROLL", providing "aladdin" as a user name and "Sesame" as a password.
Uses **db** as the name of the connection. ``` set connString {DRIVER={Microsoft Access Driver (*.mdb)};} append connString {FIL={MS Access};} append connString {DBQ=} \ [file nativename [file normalize $fileName]] tdbc::odbc::connection create db2 -readonly 1 $connString ``` Opens a connection to a Microsoft Access database file whose name is in *$fileName*. The database is opened in read-only mode. The resulting connection is called "db2". ``` tdbc::odbc::connection create db3 \ "DRIVER=SQLite3;DATABASE=$fileName" ``` Opens a connection to a SQLite3 database whose name is in "$fileName". ``` tdbc::odbc::datasource add \ {Microsoft Access Driver (*.mdb)} \ DSN=MyTestDatabase \ DBQ=[file native [file normalize $fileName]] ``` Creates a new user data source with the name "MyTestDatabase", bound to a Microsoft Access file whose path name is in "$fileName". No connection is made to the data source until the program calls **tdbc::odbc::connection create**. ``` tdbc::odbc::datasource configure \ {Microsoft Access Driver (*.mdb)} \ CREATE_DB=[file native [file normalize $fileName]] \ General ``` Creates a new, empty Microsoft Access database in the file identified by "$fileName". No connection is made to the database until the program calls **tdbc::odbc::connection create**. See also -------- **[tdbc](../tdbccmd/tdbc.htm)**, **[tdbc::connection](../tdbccmd/tdbc_connection.htm)**, **[tdbc::resultset](../tdbccmd/tdbc_resultset.htm)**, **[tdbc::statement](../tdbccmd/tdbc_statement.htm)** Copyright --------- Copyright (c) 2008 by Kevin B. Kenny.
tcl_tk tdbc_mysql tdbc\_mysql =========== [NAME](tdbc_mysql.htm#M2) tdbc::mysql — TDBC-MYSQL bridge [SYNOPSIS](tdbc_mysql.htm#M3) [DESCRIPTION](tdbc_mysql.htm#M4) [CONNECTION OPTIONS](tdbc_mysql.htm#M5) [**-host** *hostname*](tdbc_mysql.htm#M6) [**-port** *number*](tdbc_mysql.htm#M7) [**-socket** *path*](tdbc_mysql.htm#M8) [**-user** *name*](tdbc_mysql.htm#M9) [**-passwd** *password*](tdbc_mysql.htm#M10) [**-password** *password*](tdbc_mysql.htm#M11) [**-database** *name*](tdbc_mysql.htm#M12) [**-db** *name*](tdbc_mysql.htm#M13) [**-interactive** *flag*](tdbc_mysql.htm#M14) [**-ssl\_ca** *string*](tdbc_mysql.htm#M15) [**-ssl\_capath** *string*](tdbc_mysql.htm#M16) [**-ssl\_cert** *string*](tdbc_mysql.htm#M17) [**-ssl\_cipher** *string*](tdbc_mysql.htm#M18) [**-ssl\_key** *string*](tdbc_mysql.htm#M19) [EXAMPLES](tdbc_mysql.htm#M20) [ADDITIONAL CONNECTION METHODS](tdbc_mysql.htm#M21) [*$connection* **evaldirect** *sqlStatement*](tdbc_mysql.htm#M22) [SEE ALSO](tdbc_mysql.htm#M23) [KEYWORDS](tdbc_mysql.htm#M24) [COPYRIGHT](tdbc_mysql.htm#M25) Name ---- tdbc::mysql — TDBC-MYSQL bridge Synopsis -------- package require **tdbc::mysql 1.0** **tdbc::mysql::connection create** *db* ?*-option value...*? **tdbc::mysql::connection new** ?*-option value...*? **tdbc::mysql::datasources** ?**-system**|**-user**? **tdbc::mysql::drivers** **tdbc::mysql::datasource** *command* *driverName* ?*keyword*-*value*?... Description ----------- The **tdbc::mysql** driver provides a database interface that conforms to Tcl DataBase Connectivity (TDBC) and allows a Tcl script to connect to a MySQL database. Connection to a MySQL database is established by invoking **tdbc::mysql::connection create**, passing it the name to give the database handle and a set of *-option-value* pairs. The available options are enumerated under CONNECTION OPTIONS below. As an alternative, **tdbc::mysql::connection new** may be used to create a database connection with an automatically assigned name.
The return value from **tdbc::mysql::connection new** is the name that was chosen for the connection handle. The side effect of **tdbc::mysql::connection create** is to create a new database connection. See **tdbc::connection(n)** for the details of how to use the connection to manipulate a database. Connection options ------------------ The **tdbc::mysql::connection create** object command supports the **-encoding**, **-isolation**, **-readonly** and **-timeout** options common to all TDBC drivers. The **-encoding** option will always fail unless the encoding is **utf-8**; the database connection always uses UTF-8 encoding to be able to transfer arbitrary Unicode characters. The **-readonly** option must be **0**, because MySQL does not offer read-only connections. In addition, the following options are recognized: **-host** *hostname* Connects to the host specified by *hostname*. This option must be set on the initial creation of the connection; it cannot be changed after connecting. Default is to connect to the local host. **-port** *number* Connects to a MySQL server listening on the port specified by *number*. This option may not be changed after connecting. It is used only when *-host* is specified and is not **localhost**. **-socket** *path* Connects to a MySQL server listening on the Unix socket or named pipe specified by *path*. This option may not be changed after connecting. It is used only when *-host* is not specified or is **localhost**. **-user** *name* Presents *name* as the user name to the MySQL server. Default is the current user ID. **-passwd** *password* **-password** *password* These two options are synonymous. They present the given *password* as the user's password to the MySQL server. Default is not to present a password. **-database** *name* **-db** *name* These two options are synonymous. They present the given *name* as the name of the default database to use in MySQL queries.
If not specified, the default database for the current user is used. **-interactive** *flag* The *flag* value must be a Boolean value. If it is **true** (or any equivalent), the default timeout is set for an interactive user; otherwise, the default timeout is set for a batch user. This option is meaningful only on initial connection. When using the **configure** method on a MySQL connection, use the **-timeout** option to set the desired timeout. **-ssl\_ca** *string* **-ssl\_capath** *string* **-ssl\_cert** *string* **-ssl\_cipher** *string* **-ssl\_key** *string* These five options set the certificate authority, certificate authority search path, SSL certificate, transfer cipher, and SSL key to the given *string* arguments. These options may be specified only on initial connection to a database, not in the **configure** method of an existing connection. Default is not to use SSL. Examples -------- ``` tdbc::mysql::connection create db -user joe -passwd sesame -db joes_database ``` Connects to the MySQL server on the local host using the default connection method, presenting user ID 'joe' and password 'sesame'. Uses 'joes\_database' as the default database name and **db** as the name of the connection. Additional connection methods ----------------------------- In addition to the usual methods on the tdbc::connection(n) object, connections to a MySQL database support one additional method: *$connection* **evaldirect** *sqlStatement* This method takes the given *sqlStatement*, interprets it as MySQL native SQL code, and evaluates it without preparing it. The statement may not contain variable substitutions. The result set is returned as a list of lists, with each sublist being the list of columns of a result row formatted as character strings. Note that the string formatting is done by MySQL and not by Tcl, so details like the appearance of floating point numbers may differ. *This command is not recommended* for anything where the usual *prepare* or *preparecall* methods work correctly.
It is provided so that data management language statements that are not implemented in MySQL's prepared statement API, such as **CREATE DATABASE** or **CREATE PROCEDURE**, can be executed. See also -------- **[tdbc](../tdbccmd/tdbc.htm)**, **[tdbc::connection](../tdbccmd/tdbc_connection.htm)**, **[tdbc::resultset](../tdbccmd/tdbc_resultset.htm)**, **[tdbc::statement](../tdbccmd/tdbc_statement.htm)** Copyright --------- Copyright (c) 2009 by Kevin B. Kenny. tcl_tk tdbc_postgres tdbc\_postgres ============== [NAME](tdbc_postgres.htm#M2) tdbc::postgres — TDBC-POSTGRES bridge [SYNOPSIS](tdbc_postgres.htm#M3) [DESCRIPTION](tdbc_postgres.htm#M4) [CONNECTION OPTIONS](tdbc_postgres.htm#M5) [**-host** *hostname*](tdbc_postgres.htm#M6) [**-hostaddr** *address*](tdbc_postgres.htm#M7) [**-port** *number*](tdbc_postgres.htm#M8) [**-user** *name*](tdbc_postgres.htm#M9) [**-passwd** *password*](tdbc_postgres.htm#M10) [**-password** *password*](tdbc_postgres.htm#M11) [**-database** *name*](tdbc_postgres.htm#M12) [**-db** *name*](tdbc_postgres.htm#M13) [**-options** *opts*](tdbc_postgres.htm#M14) [**-tty** *file*](tdbc_postgres.htm#M15) [**-sslmode** *mode*](tdbc_postgres.htm#M16) [**-requiressl** *flag*](tdbc_postgres.htm#M17) [**-service** *name*](tdbc_postgres.htm#M18) [EXAMPLES](tdbc_postgres.htm#M19) [SEE ALSO](tdbc_postgres.htm#M20) [KEYWORDS](tdbc_postgres.htm#M21) [COPYRIGHT](tdbc_postgres.htm#M22) Name ---- tdbc::postgres — TDBC-POSTGRES bridge Synopsis -------- package require **tdbc::postgres 1.0** **tdbc::postgres::connection create** *db* ?*-option value...*? **tdbc::postgres::connection new** ?*-option value...*? Description ----------- The **tdbc::postgres** driver provides a database interface that conforms to Tcl DataBase Connectivity (TDBC) and allows a Tcl script to connect to a Postgres database.
Connection to a Postgres database is established by invoking **tdbc::postgres::connection create**, passing it the name to give the database handle and a set of *-option-value* pairs. The available options are enumerated under CONNECTION OPTIONS below. As an alternative, **tdbc::postgres::connection new** may be used to create a database connection with an automatically assigned name. The return value from **tdbc::postgres::connection new** is the name that was chosen for the connection handle. The side effect of **tdbc::postgres::connection create** is to create a new database connection. See **tdbc::connection(n)** for the details of how to use the connection to manipulate a database. Connection options ------------------ The **tdbc::postgres::connection create** object command supports the **-encoding**, **-isolation**, **-readonly** and **-timeout** options common to all TDBC drivers. The **-timeout** option affects only the connection process; once connected, the value is ignored and cannot be changed. In addition, the following options are recognized (these options must be set on the initial creation of the connection; they cannot be changed after connecting): **-host** *hostname* Connects to the host specified by *hostname*. Default is to connect using a local Unix domain socket. **-hostaddr** *address* Connects to the host specified by the given IP *address*. If both **-host** and **-hostaddr** are given, the value of **-host** is ignored. Default is to connect using a local Unix domain socket. **-port** *number* Connects to a Postgres server listening on the port specified by *number*. It is used only when **-host** or **-hostaddr** is specified. **-user** *name* Presents *name* as the user name to the Postgres server. Default is the current user ID. **-passwd** *password* **-password** *password* These two options are synonymous. They present the given *password* as the user's password to the Postgres server.
Default is not to present a password. **-database** *name* **-db** *name* These two options are synonymous. They present the given *name* as the name of the default database to use in Postgres queries. If not specified, the default database for the current user is used. **-options** *opts* This sets *opts* as additional command line options sent to the server. **-tty** *file* This option is ignored on newer servers. Formerly it specified where to send debug output. This option is left for compatibility with older servers. **-sslmode** *mode* This option determines whether or with what priority an SSL connection will be negotiated with the server. There are four *modes*: **disable** will attempt only an unencrypted connection; **allow** will negotiate, trying first a non-SSL connection, then if that fails, trying an SSL connection; **prefer** (the default) will negotiate, trying first an SSL connection, then if that fails, trying a regular non-SSL connection; **require** will try only an SSL connection. If PostgreSQL is compiled without SSL support, using option **require** will cause an error, and options **allow** and **prefer** will be tolerated but the driver will be unable to negotiate an SSL connection. **-requiressl** *flag* This option is deprecated in favor of the **-sslmode** setting. The *flag* value must be a Boolean value. If it is **true** (or any equivalent), the driver will refuse to connect if the server does not accept an SSL connection. The default value is **false**, which acts the same as **-sslmode** **prefer**. **-service** *name* It specifies a service *name* in the pg\_service.conf file that holds additional connection parameters. This allows applications to specify only a service name so connection parameters can be centrally maintained. Refer to the PostgreSQL documentation or the PREFIX/share/pg\_service.conf.sample file for details.
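Combining several of these options, a connection that insists on SSL might be sketched as follows (the host name and credentials are illustrative):

```
package require tdbc::postgres

tdbc::postgres::connection create db \
    -host db.example.com -user joe -passwd sesame \
    -sslmode require
```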
Examples -------- ``` tdbc::postgres::connection create db -user joe -passwd sesame -db joes_database ``` Connects to the Postgres server on the local host using the default connection method, presenting user ID 'joe' and password 'sesame'. Uses 'joes\_database' as the default database name and **db** as the name of the connection. See also -------- **[tdbc](../tdbccmd/tdbc.htm)**, **[tdbc::connection](../tdbccmd/tdbc_connection.htm)**, **[tdbc::resultset](../tdbccmd/tdbc_resultset.htm)**, **[tdbc::statement](../tdbccmd/tdbc_statement.htm)** Copyright --------- Copyright (c) 2009 by Slawomir Cygan tcl_tk grid grid ==== [NAME](grid.htm#M2) grid — Geometry manager that arranges widgets in a grid [SYNOPSIS](grid.htm#M3) [DESCRIPTION](grid.htm#M4) [**grid** *slave* ?*slave ...*? ?*options*?](grid.htm#M5) [**grid anchor** *master* ?*anchor*?](grid.htm#M6) [**grid bbox** *master* ?*column row*? ?*column2 row2*?](grid.htm#M7) [**grid columnconfigure** *master index* ?*-option value...*?](grid.htm#M8) [**grid configure** *slave* ?*slave ...*? ?*options*?](grid.htm#M9) [**-column** *n*](grid.htm#M10) [**-columnspan** *n*](grid.htm#M11) [**-in** *other*](grid.htm#M12) [**-ipadx** *amount*](grid.htm#M13) [**-ipady** *amount*](grid.htm#M14) [**-padx** *amount*](grid.htm#M15) [**-pady** *amount*](grid.htm#M16) [**-row** *n*](grid.htm#M17) [**-rowspan** *n*](grid.htm#M18) [**-sticky** *style*](grid.htm#M19) [**grid forget** *slave* ?*slave ...*?](grid.htm#M20) [**grid info** *slave*](grid.htm#M21) [**grid location** *master x y*](grid.htm#M22) [**grid propagate** *master* ?*boolean*?](grid.htm#M23) [**grid rowconfigure** *master index* ?*-option value...*?](grid.htm#M24) [**grid remove** *slave* ?*slave ...*?](grid.htm#M25) [**grid size** *master*](grid.htm#M26) [**grid slaves** *master* ?*-option value*?](grid.htm#M27) [RELATIVE PLACEMENT](grid.htm#M28) [**-**](grid.htm#M29) [**x**](grid.htm#M30) [**^**](grid.htm#M31) [THE GRID ALGORITHM](grid.htm#M32) [GEOMETRY PROPAGATION](grid.htm#M33) [RESTRICTIONS ON MASTER
WINDOWS](grid.htm#M34) [STACKING ORDER](grid.htm#M35) [CREDITS](grid.htm#M36) [EXAMPLES](grid.htm#M37) [SEE ALSO](grid.htm#M38) [KEYWORDS](grid.htm#M39) Name ---- grid — Geometry manager that arranges widgets in a grid Synopsis -------- **grid** *option arg* ?*arg ...*? Description ----------- The **grid** command is used to communicate with the grid geometry manager that arranges widgets in rows and columns inside of another window, called the geometry master (or master window). The **grid** command can have any of several forms, depending on the *option* argument: **grid** *slave* ?*slave ...*? ?*options*? If the first argument to **grid** is suitable as the first slave argument to **grid configure**, either a window name (any value starting with **.**) or one of the characters **x** or **^** (see the **[RELATIVE PLACEMENT](#M28)** section below), then the command is processed in the same way as **grid configure**. **grid anchor** *master* ?*anchor*? The anchor value controls how to place the grid within the master when no row/column has any weight. See **[THE GRID ALGORITHM](#M32)** below for further details. The default *anchor* is *nw*. **grid bbox** *master* ?*column row*? ?*column2 row2*? With no arguments, the bounding box (in pixels) of the grid is returned. The return value consists of 4 integers. The first two are the pixel offset from the master window (x then y) of the top-left corner of the grid, and the second two integers are the width and height of the grid, also in pixels. If a single *column* and *row* is specified on the command line, then the bounding box for that cell is returned, where the top left cell is numbered from zero. If the *column2* and *row2* arguments are also specified, then the bounding box spanning the rows and columns indicated is returned. **grid columnconfigure** *master index* ?*-option value...*? Query or set the column properties of the *index* column of the geometry master, *master*.
The valid options are **-minsize**, **-weight**, **-uniform** and **-pad**. If one or more options are provided, then *index* may be given as a list of column indices on which the configuration options will operate. Indices may be integers, window names or the keyword *all*. For *all* the options apply to all columns currently occupied by slave windows. For a window name, that window must be a slave of this master and the options apply to all columns currently occupied by the slave. The **-minsize** option sets the minimum size, in screen units, that will be permitted for this column. The **-weight** option (an integer value) sets the relative weight for apportioning any extra spaces among columns. A weight of zero (0) indicates the column will not deviate from its requested size. A column whose weight is two will grow at twice the rate of a column of weight one when extra space is allocated to the layout. The **-uniform** option, when a non-empty value is supplied, places the column in a *uniform group* with other columns that have the same value for **-uniform**. The space for columns belonging to a uniform group is allocated so that their sizes are always in strict proportion to their **-weight** values. See **[THE GRID ALGORITHM](#M32)** below for further details. The **-pad** option specifies the number of screen units that will be added to the largest window contained completely in that column when the grid geometry manager requests a size from the containing window. If only an option is specified, with no value, the current value of that option is returned. If only the master window and index are specified, all the current settings are returned in a list of “-option value” pairs. **grid configure** *slave* ?*slave ...*? ?*options*? The arguments consist of the names of one or more slave windows followed by pairs of arguments that specify how to manage the slaves.
The characters **-**, **x** and **^**, can be specified instead of a window name to alter the default location of a *slave*, as described in the **[RELATIVE PLACEMENT](#M28)** section, below. The following options are supported: **-column** *n* Insert the slave so that it occupies the *n*th column in the grid. Column numbers start with 0. If this option is not supplied, then the slave is arranged just to the right of the previous slave specified on this call to **grid**, or column “0” if it is the first slave. For each **x** that immediately precedes the *slave*, the column position is incremented by one. Thus the **x** represents a blank column for this row in the grid. **-columnspan** *n* Insert the slave so that it occupies *n* columns in the grid. The default is one column, unless the window name is followed by a **-**, in which case the columnspan is incremented once for each immediately following **-**. **-in** *other* Insert the slave(s) in the master window given by *other*. The default is the first slave's parent window. **-ipadx** *amount* The *amount* specifies how much horizontal internal padding to leave on each side of the slave(s). This space is added inside the slave(s) border. The *amount* must be a valid screen distance, such as **2** or **.5c**. It defaults to 0. **-ipady** *amount* The *amount* specifies how much vertical internal padding to leave on the top and bottom of the slave(s). This space is added inside the slave(s) border. The *amount* defaults to 0. **-padx** *amount* The *amount* specifies how much horizontal external padding to leave on each side of the slave(s), in screen units. *Amount* may be a list of two values to specify padding for left and right separately. The *amount* defaults to 0. This space is added outside the slave(s) border. **-pady** *amount* The *amount* specifies how much vertical external padding to leave on the top and bottom of the slave(s), in screen units.
*Amount* may be a list of two values to specify padding for top and bottom separately. The *amount* defaults to 0. This space is added outside the slave(s) border. **-row** *n* Insert the slave so that it occupies the *n*th row in the grid. Row numbers start with 0. If this option is not supplied, then the slave is arranged on the same row as the previous slave specified on this call to **grid**, or the first unoccupied row if this is the first slave. **-rowspan** *n* Insert the slave so that it occupies *n* rows in the grid. The default is one row. If the next **grid** command contains **^** characters instead of *slaves* that line up with the columns of this *slave*, then the **rowspan** of this *slave* is extended by one. **-sticky** *style* If a slave's cell is larger than its requested dimensions, this option may be used to position (or stretch) the slave within its cell. *Style* is a string that contains zero or more of the characters **n**, **s**, **e** or **w**. The string can optionally contain spaces or commas, but they are ignored. Each letter refers to a side (north, south, east, or west) that the slave will “stick” to. If both **n** and **s** (or **e** and **w**) are specified, the slave will be stretched to fill the entire height (or width) of its cavity. The **-sticky** option subsumes the combination of **-anchor** and **-fill** that is used by **[pack](pack.htm)**. The default is “”, which causes the slave to be centered in its cavity, at its requested size. If any of the slaves are already managed by the geometry manager then any unspecified options for them retain their previous values rather than receiving default values. **grid forget** *slave* ?*slave ...*? Removes each of the *slave*s from grid for its master and unmaps their windows. The slaves will no longer be managed by the grid geometry manager. 
The configuration options for that window are forgotten, so that if the slave is managed once more by the grid geometry manager, the initial default settings are used. **grid info** *slave* Returns a list whose elements are the current configuration state of the slave given by *slave* in the same option-value form that might be specified to **grid configure**. The first two elements of the list are “**-in** *master*” where *master* is the slave's master. **grid location** *master x y* Given *x* and *y* values in screen units relative to the master window, the column and row number at that *x* and *y* location is returned. For locations that are above or to the left of the grid, **-1** is returned. **grid propagate** *master* ?*boolean*? If *boolean* has a true boolean value such as **1** or **on** then propagation is enabled for *master*, which must be a window name (see **[GEOMETRY PROPAGATION](#M33)** below). If *boolean* has a false boolean value then propagation is disabled for *master*. In either of these cases an empty string is returned. If *boolean* is omitted then the command returns **0** or **1** to indicate whether propagation is currently enabled for *master*. Propagation is enabled by default. **grid rowconfigure** *master index* ?*-option value...*? Query or set the row properties of the *index* row of the geometry master, *master*. The valid options are **-minsize**, **-weight**, **-uniform** and **-pad**. If one or more options are provided, then *index* may be given as a list of row indices on which the configuration options will operate. Indices may be integers, window names or the keyword *all*. For *all* the options apply to all rows currently occupied by slave windows. For a window name, that window must be a slave of this master and the options apply to all rows currently occupied by the slave. The **-minsize** option sets the minimum size, in screen units, that will be permitted for this row. 
The **-weight** option (an integer value) sets the relative weight for apportioning any extra space among rows. A weight of zero (0) indicates the row will not deviate from its requested size. A row whose weight is two will grow at twice the rate of a row of weight one when extra space is allocated to the layout. The **-uniform** option, when a non-empty value is supplied, places the row in a *uniform group* with other rows that have the same value for **-uniform**. The space for rows belonging to a uniform group is allocated so that their sizes are always in strict proportion to their **-weight** values. See **[THE GRID ALGORITHM](#M32)** below for further details. The **-pad** option specifies the number of screen units that will be added to the largest window contained completely in that row when the grid geometry manager requests a size from the containing window. If only an option is specified, with no value, the current value of that option is returned. If only the master window and index are specified, all the current settings are returned in a list of “-option value” pairs. **grid remove** *slave* ?*slave ...*? Removes each of the *slave*s from grid for its master and unmaps their windows. The slaves will no longer be managed by the grid geometry manager. However, the configuration options for that window are remembered, so that if the slave is managed once more by the grid geometry manager, the previous values are retained. **grid size** *master* Returns the size of the grid (in columns then rows) for *master*. The size is determined either by the *slave* occupying the largest row or column, or the largest column or row with a **-minsize**, **-weight**, or **-pad** that is non-zero. **grid slaves** *master* ?*-option value*? If no options are supplied, a list of all of the slaves in *master* is returned, most recently managed first. 
*Option* can be either **-row** or **-column** which causes only the slaves in the row (or column) specified by *value* to be returned. Relative placement ------------------ The **grid** command contains a limited set of capabilities that permit layouts to be created without specifying the row and column information for each slave. This permits slaves to be rearranged, added, or removed without the need to explicitly specify row and column information. When no column or row information is specified for a *slave*, default values are chosen for **-column**, **-row**, **-columnspan** and **-rowspan** at the time the *slave* is managed. The values are chosen based upon the current layout of the grid, the position of the *slave* relative to other *slave*s in the same grid command, and the presence of the characters **-**, **x**, and **^** in the **grid** command where *slave* names are normally expected. **-** This increases the **-columnspan** of the *slave* to the left. Several **-**'s in a row will successively increase the number of columns spanned. A **-** may not follow a **^** or an **x**, nor may it be the first *slave* argument to **grid configure**. **x** This leaves an empty column between the *slave* on the left and the *slave* on the right. **^** This extends the **-rowspan** of the *slave* above the **^**'s in the grid. The number of **^**'s in a row must match the number of columns spanned by the *slave* above it. The grid algorithm ------------------ The grid geometry manager lays out its slaves in three steps. In the first step, the minimum size needed to fit all of the slaves is computed, then (if propagation is turned on), a request is made of the master window to become that size. In the second step, the requested size is compared against the actual size of the master. If the sizes are different, then space is added to or taken away from the layout as needed. 
For the final step, each slave is positioned in its row(s) and column(s) based on the setting of its *sticky* flag. To compute the minimum size of a layout, the grid geometry manager first looks at all slaves whose **-columnspan** and **-rowspan** values are one, and computes the nominal size of each row or column to be either the *minsize* for that row or column, or the sum of the *pad*ding plus the size of the largest slave, whichever is greater. After that the rows or columns in each uniform group adapt to each other. Then the slaves whose row-spans or column-spans are greater than one are examined. If a group of rows or columns need to be increased in size in order to accommodate these slaves, then extra space is added to each row or column in the group according to its *weight*. For each group whose weights are all zero, the additional space is apportioned equally. When multiple rows or columns belong to a uniform group, the space allocated to them is always in proportion to their weights. (A weight of zero is considered to be 1.) In other words, a row or column configured with **-weight 1 -uniform a** will have exactly the same size as any other row or column configured with **-weight 1 -uniform a**. A row or column configured with **-weight 2 -uniform b** will be exactly twice as large as one that is configured with **-weight 1 -uniform b**. More technically, each row or column in the group will have a size equal to *k\*weight* for some constant *k*. The constant *k* is chosen so that no row or column becomes smaller than its minimum size. For example, if all rows or columns in a group have the same weight, then each row or column will have the same size as the largest row or column in the group. For masters whose size is larger than the requested layout, the additional space is apportioned according to the row and column weights. If all of the weights are zero, the layout is placed within its master according to the *anchor* value. 
For masters whose size is smaller than the requested layout, space is taken away from columns and rows according to their weights. However, once a column or row shrinks to its minsize, its weight is taken to be zero. If more space needs to be removed from a layout than would be permitted, as when all the rows or columns are at their minimum sizes, the layout is placed and clipped according to the *anchor* value. Geometry propagation -------------------- The grid geometry manager normally computes how large a master must be to just exactly meet the needs of its slaves, and it sets the requested width and height of the master to these dimensions. This causes geometry information to propagate up through a window hierarchy to a top-level window so that the entire sub-tree sizes itself to fit the needs of the leaf windows. However, the **grid propagate** command may be used to turn off propagation for one or more masters. If propagation is disabled then grid will not set the requested width and height of the master window. This may be useful if, for example, you wish for a master window to have a fixed size that you specify. Restrictions on master windows ------------------------------ The master for each slave must either be the slave's parent (the default) or a descendant of the slave's parent. This restriction is necessary to guarantee that the slave can be placed over any part of its master that is visible without danger of the slave being clipped by its parent. In addition, all slaves in one call to **grid** must have the same master. Stacking order -------------- If the master for a slave is not its parent then you must make sure that the slave is higher in the stacking order than the master. Otherwise the master will obscure the slave and it will appear as if the slave has not been managed correctly. 
The easiest way to make sure the slave is higher than the master is to create the master window first: the most recently created window will be highest in the stacking order. Credits ------- The **grid** command is based on ideas taken from the *GridBag* geometry manager written by Doug Stein, and the **blt\_table** geometry manager, written by George Howlett. Examples -------- A toplevel window containing a text widget and two scrollbars: ``` # Make the widgets toplevel .t text .t.txt -wrap none -xscroll {.t.h set} -yscroll {.t.v set} scrollbar .t.v -orient vertical -command {.t.txt yview} scrollbar .t.h -orient horizontal -command {.t.txt xview} # Lay them out **grid** .t.txt .t.v -sticky nsew **grid** .t.h -sticky nsew # Tell the text widget to take all the extra room **grid rowconfigure** .t .t.txt -weight 1 **grid columnconfigure** .t .t.txt -weight 1 ``` Three widgets of equal width, despite their different “natural” widths: ``` button .b -text "Foo" entry .e -variable foo label .l -text "This is a fairly long piece of text" **grid** .b .e .l -sticky ew **grid columnconfigure** . "all" -uniform allTheSame ``` See also -------- **[pack](pack.htm)**, **[place](place.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/grid.htm>
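A further sketch of the relative placement shorthand (**-**, **x** and **^**) described under **[RELATIVE PLACEMENT](#M28)** above; the widget names are hypothetical:

```
# Hypothetical widgets
button .a -text "A"
button .b -text "B"
button .c -text "C"
# Row 0: the "-" extends .a's -columnspan to two columns; .b takes column 2
**grid** .a - .b
# Row 1: "x" leaves column 0 empty, .c takes column 1,
# and "^" extends .b's -rowspan into this row
**grid** x .c ^
```

Note the single **^** matches the single column spanned by .b, as required.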
tcl_tk pack pack ==== [NAME](pack.htm#M2) pack — Geometry manager that packs around edges of cavity [SYNOPSIS](pack.htm#M3) [DESCRIPTION](pack.htm#M4) [**pack** *slave* ?*slave ...*? ?*options*?](pack.htm#M5) [**pack configure** *slave* ?*slave ...*? ?*options*?](pack.htm#M6) [**-after** *other*](pack.htm#M7) [**-anchor** *anchor*](pack.htm#M8) [**-before** *other*](pack.htm#M9) [**-expand** *boolean*](pack.htm#M10) [**-fill** *style*](pack.htm#M11) [**none**](pack.htm#M12) [**x**](pack.htm#M13) [**y**](pack.htm#M14) [**both**](pack.htm#M15) [**-in** *other*](pack.htm#M16) [**-ipadx** *amount*](pack.htm#M17) [**-ipady** *amount*](pack.htm#M18) [**-padx** *amount*](pack.htm#M19) [**-pady** *amount*](pack.htm#M20) [**-side** *side*](pack.htm#M21) [**pack forget** *slave* ?*slave ...*?](pack.htm#M22) [**pack info** *slave*](pack.htm#M23) [**pack propagate** *master* ?*boolean*?](pack.htm#M24) [**pack slaves** *master*](pack.htm#M25) [THE PACKER ALGORITHM](pack.htm#M26) [EXPANSION](pack.htm#M27) [GEOMETRY PROPAGATION](pack.htm#M28) [RESTRICTIONS ON MASTER WINDOWS](pack.htm#M29) [PACKING ORDER](pack.htm#M30) [EXAMPLE](pack.htm#M31) [SEE ALSO](pack.htm#M32) [KEYWORDS](pack.htm#M33) Name ---- pack — Geometry manager that packs around edges of cavity Synopsis -------- **pack** *option arg* ?*arg ...*? Description ----------- The **pack** command is used to communicate with the packer, a geometry manager that arranges the children of a parent by packing them in order around the edges of the parent. The **pack** command can have any of several forms, depending on the *option* argument: **pack** *slave* ?*slave ...*? ?*options*? If the first argument to **pack** is a window name (any value starting with “.”), then the command is processed in the same way as **pack configure**. **pack configure** *slave* ?*slave ...*? ?*options*? The arguments consist of the names of one or more slave windows followed by pairs of arguments that specify how to manage the slaves. 
See **[THE PACKER ALGORITHM](#M26)** below for details on how the options are used by the packer. The following options are supported: **-after** *other* *Other* must be the name of another window. Use its master as the master for the slaves, and insert the slaves just after *other* in the packing order. **-anchor** *anchor* *Anchor* must be a valid anchor position such as **n** or **sw**; it specifies where to position each slave in its parcel. Defaults to **center**. **-before** *other* *Other* must be the name of another window. Use its master as the master for the slaves, and insert the slaves just before *other* in the packing order. **-expand** *boolean* Specifies whether the slaves should be expanded to consume extra space in their master. *Boolean* may have any proper boolean value, such as **1** or **no**. Defaults to 0. **-fill** *style* If a slave's parcel is larger than its requested dimensions, this option may be used to stretch the slave. *Style* must have one of the following values: **none** Give the slave its requested dimensions plus any internal padding requested with **-ipadx** or **-ipady**. This is the default. **x** Stretch the slave horizontally to fill the entire width of its parcel (except leave external padding as specified by **-padx**). **y** Stretch the slave vertically to fill the entire height of its parcel (except leave external padding as specified by **-pady**). **both** Stretch the slave both horizontally and vertically. **-in** *other* Insert the slave(s) at the end of the packing order for the master window given by *other*. **-ipadx** *amount* *Amount* specifies how much horizontal internal padding to leave on each side of the slave(s). *Amount* must be a valid screen distance, such as **2** or **.5c**. It defaults to 0. **-ipady** *amount* *Amount* specifies how much vertical internal padding to leave on each side of the slave(s). *Amount* defaults to 0. 
**-padx** *amount* *Amount* specifies how much horizontal external padding to leave on each side of the slave(s). *Amount* may be a list of two values to specify padding for left and right separately. *Amount* defaults to 0. **-pady** *amount* *Amount* specifies how much vertical external padding to leave on each side of the slave(s). *Amount* may be a list of two values to specify padding for top and bottom separately. *Amount* defaults to 0. **-side** *side* Specifies which side of the master the slave(s) will be packed against. Must be **left**, **right**, **top**, or **bottom**. Defaults to **top**. If no **-in**, **-after** or **-before** option is specified then each of the slaves will be inserted at the end of the packing list for its parent unless it is already managed by the packer (in which case it will be left where it is). If one of these options is specified then all the slaves will be inserted at the specified point. If any of the slaves are already managed by the geometry manager then any unspecified options for them retain their previous values rather than receiving default values. **pack forget** *slave* ?*slave ...*? Removes each of the *slave*s from the packing order for its master and unmaps their windows. The slaves will no longer be managed by the packer. **pack info** *slave* Returns a list whose elements are the current configuration state of the slave given by *slave* in the same option-value form that might be specified to **pack configure**. The first two elements of the list are “**-in** *master*” where *master* is the slave's master. **pack propagate** *master* ?*boolean*? If *boolean* has a true boolean value such as **1** or **on** then propagation is enabled for *master*, which must be a window name (see **[GEOMETRY PROPAGATION](#M28)** below). If *boolean* has a false boolean value then propagation is disabled for *master*. In either of these cases an empty string is returned. 
If *boolean* is omitted then the command returns **0** or **1** to indicate whether propagation is currently enabled for *master*. Propagation is enabled by default. **pack slaves** *master* Returns a list of all of the slaves in the packing order for *master*. The order of the slaves in the list is the same as their order in the packing order. If *master* has no slaves then an empty string is returned. The packer algorithm -------------------- For each master the packer maintains an ordered list of slaves called the *packing list*. The **-in**, **-after**, and **-before** configuration options are used to specify the master for each slave and the slave's position in the packing list. If none of these options is given for a slave then the slave is added to the end of the packing list for its parent. The packer arranges the slaves for a master by scanning the packing list in order. At the time it processes each slave, a rectangular area within the master is still unallocated. This area is called the *cavity*; for the first slave it is the entire area of the master. For each slave the packer carries out the following steps: 1. The packer allocates a rectangular *parcel* for the slave along the side of the cavity given by the slave's **-side** option. If the side is top or bottom then the width of the parcel is the width of the cavity and its height is the requested height of the slave plus the **-ipady** and **-pady** options. For the left or right side the height of the parcel is the height of the cavity and the width is the requested width of the slave plus the **-ipadx** and **-padx** options. The parcel may be enlarged further because of the **-expand** option (see **[EXPANSION](#M27)** below) 2. The packer chooses the dimensions of the slave. The width will normally be the slave's requested width plus twice its **-ipadx** option and the height will normally be the slave's requested height plus twice its **-ipady** option. 
However, if the **-fill** option is **x** or **both** then the width of the slave is expanded to fill the width of the parcel, minus twice the **-padx** option. If the **-fill** option is **y** or **both** then the height of the slave is expanded to fill the height of the parcel, minus twice the **-pady** option. 3. The packer positions the slave over its parcel. If the slave is smaller than the parcel then the **-anchor** option determines where in the parcel the slave will be placed. If **-padx** or **-pady** is non-zero, then the given amount of external padding will always be left between the slave and the edges of the parcel. Once a given slave has been packed, the area of its parcel is subtracted from the cavity, leaving a smaller rectangular cavity for the next slave. If a slave does not use all of its parcel, the unused space in the parcel will not be used by subsequent slaves. If the cavity should become too small to meet the needs of a slave then the slave will be given whatever space is left in the cavity. If the cavity shrinks to zero size, then all remaining slaves on the packing list will be unmapped from the screen until the master window becomes large enough to hold them again. ### Expansion If a master window is so large that there will be extra space left over after all of its slaves have been packed, then the extra space is distributed uniformly among all of the slaves for which the **-expand** option is set. Extra horizontal space is distributed among the expandable slaves whose **-side** is **left** or **right**, and extra vertical space is distributed among the expandable slaves whose **-side** is **top** or **bottom**. ### Geometry propagation The packer normally computes how large a master must be to just exactly meet the needs of its slaves, and it sets the requested width and height of the master to these dimensions. 
This causes geometry information to propagate up through a window hierarchy to a top-level window so that the entire sub-tree sizes itself to fit the needs of the leaf windows. However, the **pack propagate** command may be used to turn off propagation for one or more masters. If propagation is disabled then the packer will not set the requested width and height of the master. This may be useful if, for example, you wish for a master window to have a fixed size that you specify. Restrictions on master windows ------------------------------ The master for each slave must either be the slave's parent (the default) or a descendant of the slave's parent. This restriction is necessary to guarantee that the slave can be placed over any part of its master that is visible without danger of the slave being clipped by its parent. Packing order ------------- If the master for a slave is not its parent then you must make sure that the slave is higher in the stacking order than the master. Otherwise the master will obscure the slave and it will appear as if the slave has not been packed correctly. The easiest way to make sure the slave is higher than the master is to create the master window first: the most recently created window will be highest in the stacking order. Or, you can use the **[raise](raise.htm)** and **[lower](lower.htm)** commands to change the stacking order of either the master or the slave. 
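A minimal sketch of disabling propagation, as described under **[GEOMETRY PROPAGATION](#M28)** above; the widget names and the 200x100 size are hypothetical:

```
# Without propagation, .f keeps its own requested size
# instead of shrinking to fit the button it contains
frame .f -width 200 -height 100
**pack propagate** .f 0
button .f.b -text "OK"
**pack** .f.b
**pack** .f
```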
Example ------- ``` # Make the widgets label .t -text "This widget is at the top" -bg red label .b -text "This widget is at the bottom" -bg green label .l -text "Left\nHand\nSide" label .r -text "Right\nHand\nSide" text .mid .mid insert end "This layout is like Java's BorderLayout" # Lay them out **pack** .t -side top -fill x **pack** .b -side bottom -fill x **pack** .l -side left -fill y **pack** .r -side right -fill y **pack** .mid -expand 1 -fill both ``` See also -------- **[grid](grid.htm)**, **[place](place.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/pack.htm> tcl_tk TkCmd TkCmd ===== | | | | | | | --- | --- | --- | --- | --- | | [bell](bell.htm "Ring a display's bell") | [grab](grab.htm "Confine pointer and keyboard events to a window sub-tree") | [scale](scale.htm "Create and manipulate 'scale' value-controlled slider widgets") | [tk\_optionMenu](optionmenu.htm "Create an option menubutton and its menu") | [ttk::menubutton](ttk_menubutton.htm "Widget that pops down a menu when pressed") | | [bind](bind.htm "Arrange for X events to invoke Tcl scripts") | [grid](grid.htm "Geometry manager that arranges widgets in a grid") | [scrollbar](scrollbar.htm "Create and manipulate 'scrollbar' scrolling control and indicator widgets") | [tk\_patchLevel](tkvars.htm "Variables used or set by Tk") | [ttk::notebook](ttk_notebook.htm "Multi-paned container widget") | | [bindtags](bindtags.htm "Determine which bindings apply to a window, and order of evaluation") | [image](image.htm "Create and manipulate images") | [selection](selection.htm "Manipulate the X selection") | [tk\_popup](popup.htm "Post a popup menu") | [ttk::panedwindow](ttk_panedwindow.htm "Multi-pane container window") | | [bitmap](bitmap.htm "Images that display two colors") | [keysyms](keysyms.htm "Keysyms recognized by Tk") | [send](send.htm "Execute a command in a different application") | [tk\_setPalette](palette.htm "Modify the Tk 
color palette") | [ttk::progressbar](ttk_progressbar.htm "Provide progress feedback") | | [busy](busy.htm "Confine pointer and keyboard events to a window sub-tree") | [label](label.htm "Create and manipulate 'label' non-interactive text or image widgets") | [spinbox](spinbox.htm "Create and manipulate 'spinbox' value spinner widgets") | [tk\_strictMotif](tkvars.htm "Variables used or set by Tk") | [ttk::radiobutton](ttk_radiobutton.htm "Mutually exclusive option widget") | | [button](button.htm "Create and manipulate 'button' action widgets") | [labelframe](labelframe.htm "Create and manipulate 'labelframe' labelled container widgets") | [text](text.htm "Create and manipulate 'text' hypertext editing widgets") | [tk\_textCopy](text.htm "Create and manipulate 'text' hypertext editing widgets") | [ttk::scale](ttk_scale.htm "Create and manipulate a scale widget") | | [canvas](canvas.htm "Create and manipulate 'canvas' hypergraphics drawing surface widgets") | [listbox](listbox.htm "Create and manipulate 'listbox' item list widgets") | [tk](tk.htm "Manipulate Tk internal state") | [tk\_textCut](text.htm "Create and manipulate 'text' hypertext editing widgets") | [ttk::scrollbar](ttk_scrollbar.htm "Control the viewport of a scrollable widget") | | [checkbutton](checkbutton.htm "Create and manipulate 'checkbutton' boolean selection widgets") | [lower](lower.htm "Change a window's position in the stacking order") | [tk::mac](tk_mac.htm "Access Mac-Specific Functionality on OS X from Tk") | [tk\_textPaste](text.htm "Create and manipulate 'text' hypertext editing widgets") | [ttk::separator](ttk_separator.htm "Separator bar") | | [clipboard](clipboard.htm "Manipulate Tk clipboard") | [menu](menu.htm "Create and manipulate 'menu' widgets and menubars") | [tk\_bisque](palette.htm "Modify the Tk color palette") | [tk\_version](tkvars.htm "Variables used or set by Tk") | [ttk::sizegrip](ttk_sizegrip.htm "Bottom-right corner resize widget") | | [colors](colors.htm "Symbolic 
color names recognized by Tk") | [menubutton](menubutton.htm "Create and manipulate 'menubutton' pop-up menu indicator widgets") | [tk\_chooseColor](choosecolor.htm "Pops up a dialog box for the user to select a color.") | [tkerror](tkerror.htm "Command invoked to process background errors") | [ttk::spinbox](ttk_spinbox.htm "Selecting text field widget") | | [console](console.htm "Control the console on systems without a real console") | [message](message.htm "Create and manipulate 'message' non-interactive text widgets") | [tk\_chooseDirectory](choosedirectory.htm "Pops up a dialog box for the user to select a directory.") | [tkwait](tkwait.htm "Wait for variable to change or window to be destroyed") | [ttk::style](ttk_style.htm "Manipulate style database") | | [cursors](cursors.htm "Mouse cursors available in Tk") | [option](option.htm "Add/retrieve window options to/from the option database") | [tk\_dialog](dialog.htm "Create modal dialog and wait for response") | [toplevel](toplevel.htm "Create and manipulate 'toplevel' main and popup window widgets") | [ttk::treeview](ttk_treeview.htm "Hierarchical multicolumn data display widget") | | [destroy](destroy.htm "Destroy one or more windows") | [options](options.htm "Standard options supported by widgets") | [tk\_focusFollowsMouse](focusnext.htm "Utility procedures for managing the input focus.") | [ttk::button](ttk_button.htm "Widget that issues a command when pressed") | [ttk::widget](ttk_widget.htm "Standard options and commands supported by Tk themed widgets") | | [entry](entry.htm "Create and manipulate 'entry' one-line text entry widgets") | [pack](pack.htm "Geometry manager that packs around edges of cavity") | [tk\_focusNext](focusnext.htm "Utility procedures for managing the input focus.") | [ttk::checkbutton](ttk_checkbutton.htm "On/off widget") | [ttk\_image](ttk_image.htm "Define an element based on an image") | | [event](event.htm "Miscellaneous event facilities: define virtual events and generate 
events") | [panedwindow](panedwindow.htm "Create and manipulate 'panedwindow' split container widgets") | [tk\_focusPrev](focusnext.htm "Utility procedures for managing the input focus.") | [ttk::combobox](ttk_combobox.htm "Text field with popdown selection list") | [ttk\_vsapi](ttk_vsapi.htm "Define a Microsoft Visual Styles element") | | [focus](focus.htm "Manage the input focus") | [photo](photo.htm "Full-color images") | [tk\_getOpenFile](getopenfile.htm "Pop up a dialog box for the user to select a file to open or save.") | [ttk::entry](ttk_entry.htm "Editable text field widget") | [winfo](winfo.htm "Return window-related information") | | [font](font.htm "Create and inspect fonts.") | [place](place.htm "Geometry manager for fixed or rubber-sheet placement") | [tk\_getSaveFile](getopenfile.htm "Pop up a dialog box for the user to select a file to open or save.") | [ttk::frame](ttk_frame.htm "Simple container widget") | [wm](wm.htm "Communicate with window manager") | | [fontchooser](fontchooser.htm "Control font selection dialog") | [radiobutton](radiobutton.htm "Create and manipulate 'radiobutton' pick-one widgets") | [tk\_library](tkvars.htm "Variables used or set by Tk") | [ttk::intro](ttk_intro.htm "Introduction to the Tk theme engine") | | [frame](frame.htm "Create and manipulate 'frame' simple container widgets") | [raise](raise.htm "Change a window's position in the stacking order") | [tk\_menuSetFocus](menu.htm "Create and manipulate 'menu' widgets and menubars") | [ttk::label](ttk_label.htm "Display a text string and/or image") | | [geometry](tkvars.htm "Variables used or set by Tk") | [safe::loadTk](loadtk.htm "Load Tk into a safe interpreter.") | [tk\_messageBox](messagebox.htm "Pops up a message window and waits for user response.") | [ttk::labelframe](ttk_labelframe.htm "Container widget with optional label") | Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/contents.htm> tcl_tk 
ttk_notebook ttk\_notebook ============= [NAME](ttk_notebook.htm#M2) ttk::notebook — Multi-paned container widget [SYNOPSIS](ttk_notebook.htm#M3) [DESCRIPTION](ttk_notebook.htm#M4) [STANDARD OPTIONS](ttk_notebook.htm#M5) [-class, undefined, undefined](ttk_widget.htm#M-class) [-cursor, cursor, Cursor](ttk_widget.htm#M-cursor) [-style, style, Style](ttk_widget.htm#M-style) [-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus) [WIDGET-SPECIFIC OPTIONS](ttk_notebook.htm#M6) [-height, height, Height](ttk_notebook.htm#M7) [-padding, padding, Padding](ttk_notebook.htm#M8) [-width, width, Width](ttk_notebook.htm#M9) [TAB OPTIONS](ttk_notebook.htm#M10) [-state, state, State](ttk_notebook.htm#M11) [-sticky, sticky, Sticky](ttk_notebook.htm#M12) [-padding, padding, Padding](ttk_notebook.htm#M13) [-text, text, Text](ttk_notebook.htm#M14) [-image, image, Image](ttk_notebook.htm#M15) [-compound, compound, Compound](ttk_notebook.htm#M16) [-underline, underline, Underline](ttk_notebook.htm#M17) [TAB IDENTIFIERS](ttk_notebook.htm#M18) [WIDGET COMMAND](ttk_notebook.htm#M19) [*pathname* **add** *window* ?*options...*?](ttk_notebook.htm#M20) [*pathname* **configure** ?*options*?](ttk_notebook.htm#M21) [*pathname* **cget** *option*](ttk_notebook.htm#M22) [*pathname* **forget** *tabid*](ttk_notebook.htm#M23) [*pathname* **hide** *tabid*](ttk_notebook.htm#M24) [*pathname* **identify** *component x y*](ttk_notebook.htm#M25) [*pathname* **identify element** *x y*](ttk_notebook.htm#M26) [*pathname* **identify tab** *x y*](ttk_notebook.htm#M27) [*pathname* **index** *tabid*](ttk_notebook.htm#M28) [*pathname* **insert** *pos subwindow options...*](ttk_notebook.htm#M29) [*pathname* **instate** *statespec* ?*script...*?](ttk_notebook.htm#M30) [*pathname* **select** ?*tabid*?](ttk_notebook.htm#M31) [*pathname* **state** ?*statespec*?](ttk_notebook.htm#M32) [*pathname* **tab** *tabid* ?*-option* ?*value ...*](ttk_notebook.htm#M33) [*pathname* **tabs**](ttk_notebook.htm#M34) [KEYBOARD 
TRAVERSAL](ttk_notebook.htm#M35) [VIRTUAL EVENTS](ttk_notebook.htm#M36) [EXAMPLE](ttk_notebook.htm#M37) [SEE ALSO](ttk_notebook.htm#M38) [KEYWORDS](ttk_notebook.htm#M39) Name ---- ttk::notebook — Multi-paned container widget Synopsis -------- **ttk::notebook** *pathname* ?*options...*? *pathname* **add** *window* ?*options...*? *pathname* **insert** *index* *window* ?*options...*? Description ----------- A **ttk::notebook** widget manages a collection of windows and displays a single one at a time. Each slave window is associated with a *tab*, which the user may select to change the currently-displayed window. Standard options ---------------- **[-class, undefined, undefined](ttk_widget.htm#M-class)** **[-cursor, cursor, Cursor](ttk_widget.htm#M-cursor)** **[-style, style, Style](ttk_widget.htm#M-style)** **[-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus)** Widget-specific options ----------------------- Command-Line Name: **-height** Database Name: **height** Database Class: **Height** If present and greater than zero, specifies the desired height of the pane area (not including internal padding or tabs). Otherwise, the maximum height of all panes is used. Command-Line Name: **-padding** Database Name: **padding** Database Class: **Padding** Specifies the amount of extra space to add around the outside of the notebook. The padding is a list of up to four length specifications *left top right bottom*. If fewer than four elements are specified, *bottom* defaults to *top*, *right* defaults to *left*, and *top* defaults to *left*. Command-Line Name: **-width** Database Name: **width** Database Class: **Width** If present and greater than zero, specifies the desired width of the pane area (not including internal padding). Otherwise, the maximum width of all panes is used. 
Tab options ----------- The following options may be specified for individual notebook panes: Command-Line Name: **-state** Database Name: **state** Database Class: **State** Either **normal**, **disabled** or **hidden**. If **disabled**, then the tab is not selectable. If **hidden**, then the tab is not shown. Command-Line Name: **-sticky** Database Name: **sticky** Database Class: **Sticky** Specifies how the slave window is positioned within the pane area. Value is a string containing zero or more of the characters **n, s, e,** or **w**. Each letter refers to a side (north, south, east, or west) that the slave window will “stick” to, as per the **[grid](grid.htm)** geometry manager. Command-Line Name: **-padding** Database Name: **padding** Database Class: **Padding** Specifies the amount of extra space to add between the notebook and this pane. Syntax is the same as for the widget **-padding** option. Command-Line Name: **-text** Database Name: **text** Database Class: **Text** Specifies a string to be displayed in the tab. Command-Line Name: **-image** Database Name: **image** Database Class: **Image** Specifies an image to display in the tab. See *ttk\_widget(n)* for details. Command-Line Name: **-compound** Database Name: **compound** Database Class: **Compound** Specifies how to display the image relative to the text, in the case both **-text** and **-image** are present. See *label(n)* for legal values. Command-Line Name: **-underline** Database Name: **underline** Database Class: **Underline** Specifies the integer index (0-based) of a character to underline in the text string. The underlined character is used for mnemonic activation if **ttk::notebook::enableTraversal** is called. 
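As a sketch of the widget-specific and tab options described above (the widget names, sizes, and tab contents are illustrative, not part of the manual):

```tcl
# Assumes a running Tk interpreter (wish); all paths are hypothetical.
ttk::notebook .nb -padding {4 4 4 4} -height 200
pack .nb -fill both -expand yes

# A normal tab with an underlined mnemonic character ("S" in "Settings")
.nb add [ttk::frame .nb.settings] -text "Settings" -underline 0 -sticky nsew

# A disabled tab: displayed, but not selectable
.nb add [ttk::frame .nb.logs] -text "Logs" -state disabled

# Extra padding between the notebook and this pane
.nb add [ttk::frame .nb.help] -text "Help" -padding {10 10}
```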
Tab identifiers --------------- The *tabid* argument to the following commands may take any of the following forms: * An integer between zero and the number of tabs; * The name of a slave window; * A positional specification of the form “@*x*,*y*”, which identifies the tab nearest to the point (*x*,*y*) in the notebook window; * The literal string “**current**”, which identifies the currently-selected tab; or: * The literal string “**end**”, which returns the number of tabs (only valid for “*pathname* **index**”). Widget command -------------- *pathname* **add** *window* ?*options...*? Adds a new tab to the notebook. See **[TAB OPTIONS](#M10)** for the list of available *options*. If *window* is currently managed by the notebook but hidden, it is restored to its previous position. *pathname* **configure** ?*options*? See *ttk::widget(n)*. *pathname* **cget** *option* See *ttk::widget(n)*. *pathname* **forget** *tabid* Removes the tab specified by *tabid*, unmaps and unmanages the associated window. *pathname* **hide** *tabid* Hides the tab specified by *tabid*. The tab will not be displayed, but the associated window remains managed by the notebook and its configuration remembered. Hidden tabs may be restored with the **add** command. *pathname* **identify** *component x y* Returns the name of the element under the point given by *x* and *y*, or the empty string if no component is present at that location. The following subcommands are supported: *pathname* **identify element** *x y* Returns the name of the element at the specified location. *pathname* **identify tab** *x y* Returns the index of the tab at the specified location. *pathname* **index** *tabid* Returns the numeric index of the tab specified by *tabid*, or the total number of tabs if *tabid* is the string “**end**”. *pathname* **insert** *pos subwindow options...* Inserts a pane at the specified position. *pos* is either the string **end**, an integer index, or the name of a managed subwindow. 
If *subwindow* is already managed by the notebook, moves it to the specified position. See **[TAB OPTIONS](#M10)** for the list of available options. *pathname* **instate** *statespec* ?*script...*? See *ttk::widget(n)*. *pathname* **select** ?*tabid*? Selects the specified tab. The associated slave window will be displayed, and the previously-selected window (if different) is unmapped. If *tabid* is omitted, returns the widget name of the currently selected pane. *pathname* **state** ?*statespec*? See *ttk::widget(n)*. *pathname* **tab** *tabid* ?*-option* ?*value ...* Query or modify the options of the specific tab. If no *-option* is specified, returns a dictionary of the tab option values. If one *-option* is specified, returns the value of that *option*. Otherwise, sets the *-option*s to the corresponding *value*s. See **[TAB OPTIONS](#M10)** for the available options. *pathname* **tabs** Returns the list of windows managed by the notebook. Keyboard traversal ------------------ To enable keyboard traversal for a toplevel window containing a notebook widget *$nb*, call: ``` ttk::notebook::enableTraversal $nb ``` This will extend the bindings for the toplevel window containing the notebook as follows: * **Control-Tab** selects the tab following the currently selected one. * **Control-Shift-Tab** selects the tab preceding the currently selected one. * **Alt-***K*, where *K* is the mnemonic (underlined) character of any tab, will select that tab. Multiple notebooks in a single toplevel may be enabled for traversal, including nested notebooks. However, notebook traversal only works properly if all panes are direct children of the notebook. Virtual events -------------- The notebook widget generates a **<<NotebookTabChanged>>** virtual event after a new tab is selected. 
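The widget commands and the **<<NotebookTabChanged>>** virtual event above can be sketched as follows (the notebook path `.nb` and its tabs are assumed to exist already; this is an illustration, not part of the manual):

```tcl
# React whenever a different tab is selected; %W is the notebook path.
bind .nb <<NotebookTabChanged>> {
    # "select" with no argument returns the currently selected pane
    puts "switched to [%W select] (index [%W index current])"
}

# Query one tab option, then modify several at once
puts [.nb tab 0 -text]
.nb tab 0 -text "Renamed" -state normal

# Hide a tab; "add" on the same window later restores it in place
.nb hide 1
.nb add [lindex [.nb tabs] 1]
```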
Example -------

```
pack [ttk::notebook .nb]
.nb add [frame .nb.f1] -text "First tab"
.nb add [frame .nb.f2] -text "Second tab"
.nb select .nb.f2
ttk::notebook::enableTraversal .nb
```

See also -------- **[ttk::widget](ttk_widget.htm)**, **[grid](grid.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/ttk_notebook.htm>
tcl_tk ttk_scale ttk\_scale ========== [NAME](ttk_scale.htm#M2) ttk::scale — Create and manipulate a scale widget [SYNOPSIS](ttk_scale.htm#M3) [DESCRIPTION](ttk_scale.htm#M4) [STANDARD OPTIONS](ttk_scale.htm#M5) [-class, undefined, undefined](ttk_widget.htm#M-class) [-cursor, cursor, Cursor](ttk_widget.htm#M-cursor) [-style, style, Style](ttk_widget.htm#M-style) [-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus) [WIDGET-SPECIFIC OPTIONS](ttk_scale.htm#M6) [-command, command, Command](ttk_scale.htm#M7) [-from, from, From](ttk_scale.htm#M8) [-length, length, Length](ttk_scale.htm#M9) [-orient, orient, Orient](ttk_scale.htm#M10) [-to, to, To](ttk_scale.htm#M11) [-value, value, Value](ttk_scale.htm#M12) [-variable, variable, Variable](ttk_scale.htm#M13) [WIDGET COMMAND](ttk_scale.htm#M14) [*pathName* **cget** *option*](ttk_scale.htm#M15) [*pathName* **configure** ?*option*? ?*value option value ...*?](ttk_scale.htm#M16) [*pathName* **get** ?*x y*?](ttk_scale.htm#M17) [*pathName* **identify** *x y*](ttk_scale.htm#M18) [*pathName* **instate** *statespec* ?*script*?](ttk_scale.htm#M19) [*pathName* **set** *value*](ttk_scale.htm#M20) [*pathName* **state** ?*stateSpec*?](ttk_scale.htm#M21) [INTERNAL COMMANDS](ttk_scale.htm#M22) [*pathName* **coords** ?*value*?](ttk_scale.htm#M23) [SEE ALSO](ttk_scale.htm#M24) [KEYWORDS](ttk_scale.htm#M25) Name ---- ttk::scale — Create and manipulate a scale widget Synopsis -------- **ttk::scale** *pathName* ?*options...*? Description ----------- A **ttk::scale** widget is typically used to control the numeric value of a linked variable that varies uniformly over some range. A scale displays a *slider* that can be moved along over a *trough*, with the relative position of the slider over the trough indicating the value of the variable. 
Standard options ---------------- **[-class, undefined, undefined](ttk_widget.htm#M-class)** **[-cursor, cursor, Cursor](ttk_widget.htm#M-cursor)** **[-style, style, Style](ttk_widget.htm#M-style)** **[-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus)** Widget-specific options ----------------------- Command-Line Name: **-command** Database Name: **command** Database Class: **Command** Specifies the prefix of a Tcl command to invoke whenever the scale's value is changed via a widget command. The actual command consists of this option followed by a space and a real number indicating the new value of the scale. Command-Line Name: **-from** Database Name: **from** Database Class: **From** A real value corresponding to the left or top end of the scale. Command-Line Name: **-length** Database Name: **length** Database Class: **Length** Specifies the desired long dimension of the scale in screen units (i.e. any of the forms acceptable to **[Tk\_GetPixels](https://www.tcl.tk/man/tcl/TkLib/GetPixels.htm)**). For vertical scales this is the scale's height; for horizontal scales it is the scale's width. Command-Line Name: **-orient** Database Name: **orient** Database Class: **Orient** Specifies whether the widget should be laid out horizontally or vertically. Must be either **horizontal** or **vertical** or an abbreviation of one of these. Command-Line Name: **-to** Database Name: **to** Database Class: **To** Specifies a real value corresponding to the right or bottom end of the scale. This value may be either less than or greater than the **-from** option. Command-Line Name: **-value** Database Name: **value** Database Class: **Value** Specifies the current floating-point value of the variable. Command-Line Name: **-variable** Database Name: **variable** Database Class: **Variable** Specifies the name of a global variable to link to the scale. Whenever the value of the variable changes, the scale will update to reflect this value. 
Whenever the scale is manipulated interactively, the variable will be modified to reflect the scale's new value. Widget command -------------- *pathName* **cget** *option* Returns the current value of the specified *option*; see *ttk::widget(n)*. *pathName* **configure** ?*option*? ?*value option value ...*? Modify or query widget options; see *ttk::widget(n)*. *pathName* **get** ?*x y*? Get the current value of the **-value** option, or the value corresponding to the coordinates *x,y* if they are specified. *X* and *y* are pixel coordinates relative to the scale widget origin. *pathName* **identify** *x y* Returns the name of the element at position *x*, *y*. See *ttk::widget(n)*. *pathName* **instate** *statespec* ?*script*? Test the widget state; see *ttk::widget(n)*. *pathName* **set** *value* Set the value of the widget (i.e. the **-value** option) to *value*. The value will be clipped to the range given by the **-from** and **-to** options. Note that setting the linked variable (i.e. the variable named in the **-variable** option) does not cause such clipping. *pathName* **state** ?*stateSpec*? Modify or query the widget state; see *ttk::widget(n)*. Internal commands ----------------- *pathName* **coords** ?*value*? Get the coordinates corresponding to *value*, or the coordinates corresponding to the current value of the **-value** option if *value* is omitted. 
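The options and widget commands above can be sketched as follows (the widget path `.vol`, the variable name, and the callback are illustrative assumptions):

```tcl
# A horizontal scale linked to a global variable, with a change callback.
set volume 50
ttk::scale .vol -orient horizontal -from 0 -to 100 \
    -length 200 -variable volume \
    -command {apply {v {puts "volume is now $v"}}}
pack .vol

# "set" clips to the -from/-to range and updates the linked variable;
# assigning the variable directly would NOT be clipped.
.vol set 150
puts $volume   ;# clipped to the -to limit
```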
See also -------- **[ttk::widget](ttk_widget.htm)**, **[scale](scale.htm)** tcl_tk ttk_radiobutton ttk\_radiobutton ================ [NAME](ttk_radiobutton.htm#M2) ttk::radiobutton — Mutually exclusive option widget [SYNOPSIS](ttk_radiobutton.htm#M3) [DESCRIPTION](ttk_radiobutton.htm#M4) [STANDARD OPTIONS](ttk_radiobutton.htm#M5) [-class, undefined, undefined](ttk_widget.htm#M-class) [-compound, compound, Compound](ttk_widget.htm#M-compound) [-cursor, cursor, Cursor](ttk_widget.htm#M-cursor) [-image, image, Image](ttk_widget.htm#M-image) [-state, state, State](ttk_widget.htm#M-state) [-style, style, Style](ttk_widget.htm#M-style) [-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus) [-text, text, Text](ttk_widget.htm#M-text) [-textvariable, textVariable, Variable](ttk_widget.htm#M-textvariable) [-underline, underline, Underline](ttk_widget.htm#M-underline) [-width, width, Width](ttk_widget.htm#M-width) [WIDGET-SPECIFIC OPTIONS](ttk_radiobutton.htm#M6) [-command, command, Command](ttk_radiobutton.htm#M7) [-value, Value, Value](ttk_radiobutton.htm#M8) [-variable, variable, Variable](ttk_radiobutton.htm#M9) [WIDGET COMMAND](ttk_radiobutton.htm#M10) [*pathname* **invoke**](ttk_radiobutton.htm#M11) [WIDGET STATES](ttk_radiobutton.htm#M12) [STANDARD STYLES](ttk_radiobutton.htm#M13) [SEE ALSO](ttk_radiobutton.htm#M14) [KEYWORDS](ttk_radiobutton.htm#M15) Name ---- ttk::radiobutton — Mutually exclusive option widget Synopsis -------- **ttk::radiobutton** *pathName* ?*options*? Description ----------- **ttk::radiobutton** widgets are used in groups to show or change a set of mutually-exclusive options. Radiobuttons are linked to a Tcl variable, and have an associated value; when a radiobutton is clicked, it sets the variable to its associated value. 
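A minimal sketch of such a mutually exclusive group (widget paths, the variable name, and values are illustrative, not part of the manual):

```tcl
# Three radiobuttons sharing one linked variable.
set compression gzip    ;# initial value selects .r1

ttk::radiobutton .r1 -text "gzip"  -variable compression -value gzip
ttk::radiobutton .r2 -text "bzip2" -variable compression -value bzip2
ttk::radiobutton .r3 -text "none"  -variable compression -value none \
    -command {puts "compression set to $compression"}
pack .r1 .r2 .r3 -anchor w

# "invoke" sets the variable, selects the widget, and runs its -command
.r3 invoke
```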
Standard options ---------------- **[-class, undefined, undefined](ttk_widget.htm#M-class)** **[-compound, compound, Compound](ttk_widget.htm#M-compound)** **[-cursor, cursor, Cursor](ttk_widget.htm#M-cursor)** **[-image, image, Image](ttk_widget.htm#M-image)** **[-state, state, State](ttk_widget.htm#M-state)** **[-style, style, Style](ttk_widget.htm#M-style)** **[-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus)** **[-text, text, Text](ttk_widget.htm#M-text)** **[-textvariable, textVariable, Variable](ttk_widget.htm#M-textvariable)** **[-underline, underline, Underline](ttk_widget.htm#M-underline)** **[-width, width, Width](ttk_widget.htm#M-width)** Widget-specific options ----------------------- Command-Line Name: **-command** Database Name: **command** Database Class: **Command** A Tcl script to evaluate whenever the widget is invoked. Command-Line Name: **-value** Database Name: **Value** Database Class: **Value** The value to store in the associated **-variable** when the widget is selected. Command-Line Name: **-variable** Database Name: **variable** Database Class: **Variable** The name of a global variable whose value is linked to the widget. Default value is **::selectedButton**. Widget command -------------- In addition to the standard **cget**, **configure**, **identify**, **instate**, and **state** commands, radiobuttons support the following additional widget commands: *pathname* **invoke** Sets the **-variable** to the **-value**, selects the widget, and evaluates the associated **-command**. Returns the result of the **-command**, or the empty string if no **-command** is specified. Widget states ------------- The widget does not respond to user input if the **disabled** state is set. The widget sets the **selected** state whenever the linked **-variable** is set to the widget's **-value**, and clears it otherwise. The widget sets the **alternate** state whenever the linked **-variable** is unset. 
(The **alternate** state may be used to indicate a “tri-state” or “indeterminate” selection.) Standard styles --------------- **Ttk::radiobutton** widgets support the **Toolbutton** style in all standard themes, which is useful for creating widgets for toolbars. See also -------- **[ttk::widget](ttk_widget.htm)**, **[ttk::checkbutton](ttk_checkbutton.htm)**, **[radiobutton](radiobutton.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/ttk_radiobutton.htm> tcl_tk photo photo ===== [NAME](photo.htm#M2) photo — Full-color images [SYNOPSIS](photo.htm#M3) [DESCRIPTION](photo.htm#M4) [CREATING PHOTOS](photo.htm#M5) [**-data** *string*](photo.htm#M6) [**-format** *format-name*](photo.htm#M7) [**-file** *name*](photo.htm#M8) [**-gamma** *value*](photo.htm#M9) [**-height** *number*](photo.htm#M10) [**-palette** *palette-spec*](photo.htm#M11) [**-width** *number*](photo.htm#M12) [IMAGE COMMAND](photo.htm#M13) [*imageName* **blank**](photo.htm#M14) [*imageName* **cget** *option*](photo.htm#M15) [*imageName* **configure** ?*option*? 
?*value option value ...*?](photo.htm#M16) [*imageName* **copy** *sourceImage* ?*option value(s) ...*?](photo.htm#M17) [**-from** *x1 y1 x2 y2*](photo.htm#M18) [**-to** *x1 y1 x2 y2*](photo.htm#M19) [**-shrink**](photo.htm#M20) [**-zoom** *x y*](photo.htm#M21) [**-subsample** *x y*](photo.htm#M22) [**-compositingrule** *rule*](photo.htm#M23) [*imageName* **data** ?*option value(s) ...*?](photo.htm#M24) [**-background** *color*](photo.htm#M25) [**-format** *format-name*](photo.htm#M26) [**-from** *x1 y1 x2 y2*](photo.htm#M27) [**-grayscale**](photo.htm#M28) [*imageName* **get** *x y*](photo.htm#M29) [*imageName* **put** *data* ?*option value(s) ...*?](photo.htm#M30) [**-format** *format-name*](photo.htm#M31) [**-to** *x1 y1* ?*x2 y2*?](photo.htm#M32) [*imageName* **read** *filename* ?*option value(s) ...*?](photo.htm#M33) [**-format** *format-name*](photo.htm#M34) [**-from** *x1 y1 x2 y2*](photo.htm#M35) [**-shrink**](photo.htm#M36) [**-to** *x y*](photo.htm#M37) [*imageName* **redither**](photo.htm#M38) [*imageName* **transparency** *subcommand* ?*arg arg ...*?](photo.htm#M39) [*imageName* **transparency get** *x y*](photo.htm#M40) [*imageName* **transparency set** *x y boolean*](photo.htm#M41) [*imageName* **write** *filename* ?*option value(s) ...*?](photo.htm#M42) [**-background** *color*](photo.htm#M43) [**-format** *format-name*](photo.htm#M44) [**-from** *x1 y1 x2 y2*](photo.htm#M45) [**-grayscale**](photo.htm#M46) [IMAGE FORMATS](photo.htm#M47) [FORMAT SUBOPTIONS](photo.htm#M48) [**gif -index** *indexValue*](photo.htm#M49) [**png -alpha** *alphaValue*](photo.htm#M50) [COLOR ALLOCATION](photo.htm#M51) [CREDITS](photo.htm#M52) [EXAMPLE](photo.htm#M53) [SEE ALSO](photo.htm#M54) [KEYWORDS](photo.htm#M55) Name ---- photo — Full-color images Synopsis -------- **image create photo** ?*name*? ?*options*? *imageName* **blank** *imageName* **cget** *option* *imageName* **configure** ?*option*? ?*value option value ...*? 
*imageName* **copy** *sourceImage* ?*option value(s) ...*? *imageName* **data** ?*option value(s) ...*? *imageName* **get** *x y* *imageName* **put** *data* ?*option value(s) ...*? *imageName* **read** *filename* ?*option value(s) ...*? *imageName* **redither** *imageName* **transparency** *subcommand* ?*arg arg ...*? *imageName* **write** *filename* ?*option value(s) ...*? Description ----------- A photo is an image whose pixels can display any color or be transparent. A photo image is stored internally in full color (32 bits per pixel), and is displayed using dithering if necessary. Image data for a photo image can be obtained from a file or a string, or it can be supplied from C code through a procedural interface. At present, only PNG, GIF and PPM/PGM formats are supported, but an interface exists to allow additional image file formats to be added easily. A photo image is transparent in regions where no image data has been supplied or where it has been set transparent by the **transparency set** subcommand. Creating photos --------------- Like all images, photos are created using the **[image create](image.htm)** command. Photos support the following *options*: **-data** *string* Specifies the contents of the image as a string. The string should contain binary data or, for some formats, base64-encoded data (this is currently guaranteed to be supported for PNG and GIF images). The format of the string must be one of those for which there is an image file format handler that will accept string data. If both the **-data** and **-file** options are specified, the **-file** option takes precedence. **-format** *format-name* Specifies the name of the file format for the data specified with the **-data** or **-file** option. **-file** *name* *name* gives the name of a file that is to be read to supply data for the photo image. The file format must be one of those for which there is an image file format handler that can read data. 
**-gamma** *value* Specifies that the colors allocated for displaying this image in a window should be corrected for a non-linear display with the specified gamma exponent value. (The intensity produced by most CRT displays is a power function of the input value, to a good approximation; gamma is the exponent and is typically around 2). The value specified must be greater than zero. The default value is one (no correction). In general, values greater than one will make the image lighter, and values less than one will make it darker. **-height** *number* Specifies the height of the image, in pixels. This option is useful primarily in situations where the user wishes to build up the contents of the image piece by piece. A value of zero (the default) allows the image to expand or shrink vertically to fit the data stored in it. **-palette** *palette-spec* Specifies the resolution of the color cube to be allocated for displaying this image, and thus the number of colors used from the colormaps of the windows where it is displayed. The *palette-spec* string may be either a single decimal number, specifying the number of shades of gray to use, or three decimal numbers separated by slashes (/), specifying the number of shades of red, green and blue to use, respectively. If the first form (a single number) is used, the image will be displayed in monochrome (i.e., grayscale). **-width** *number* Specifies the width of the image, in pixels. This option is useful primarily in situations where the user wishes to build up the contents of the image piece by piece. A value of zero (the default) allows the image to expand or shrink horizontally to fit the data stored in it. Image command ------------- When a photo image is created, Tk also creates a new command whose name is the same as the image. This command may be used to invoke various operations on the image. It has the following general form: ``` *imageName option* ?*arg arg ...*? 
``` *Option* and the *arg*s determine the exact behavior of the command. Those options that write data to the image generally expand the size of the image, if necessary, to accommodate the data written to the image, unless the user has specified non-zero values for the **-width** and/or **-height** configuration options, in which case the width and/or height, respectively, of the image will not be changed. The following commands are possible for photo images: *imageName* **blank** Blank the image; that is, set the entire image to have no data, so it will be displayed as transparent, and the background of whatever window it is displayed in will show through. *imageName* **cget** *option* Returns the current value of the configuration option given by *option*. *Option* may have any of the values accepted by the **[image create](image.htm)** **photo** command. *imageName* **configure** ?*option*? ?*value option value ...*? Query or modify the configuration options for the image. If no *option* is specified, returns a list describing all of the available options for *imageName* (see **[Tk\_ConfigureInfo](https://www.tcl.tk/man/tcl/TkLib/ConfigWidg.htm)** for information on the format of this list). If *option* is specified with no *value*, then the command returns a list describing the one named option (this list will be identical to the corresponding sublist of the value returned if no *option* is specified). If one or more *option-value* pairs are specified, then the command modifies the given option(s) to have the given value(s); in this case the command returns an empty string. *Option* may have any of the values accepted by the **[image create](image.htm)** **photo** command. *imageName* **copy** *sourceImage* ?*option value(s) ...*? Copies a region from the image called *sourceImage* (which must be a photo image) to the image called *imageName*, possibly with pixel zooming and/or subsampling. 
If no options are specified, this command copies the whole of *sourceImage* into *imageName*, starting at coordinates (0,0) in *imageName*. The following options may be specified: **-from** *x1 y1 x2 y2* Specifies a rectangular sub-region of the source image to be copied. (*x1,y1*) and (*x2,y2*) specify diagonally opposite corners of the rectangle. If *x2* and *y2* are not specified, the default value is the bottom-right corner of the source image. The pixels copied will include the left and top edges of the specified rectangle but not the bottom or right edges. If the **-from** option is not given, the default is the whole source image. **-to** *x1 y1 x2 y2* Specifies a rectangular sub-region of the destination image to be affected. (*x1,y1*) and (*x2,y2*) specify diagonally opposite corners of the rectangle. If *x2* and *y2* are not specified, the default value is (*x1,y1*) plus the size of the source region (after subsampling and zooming, if specified). If *x2* and *y2* are specified, the source region will be replicated if necessary to fill the destination region in a tiled fashion. **-shrink** Specifies that the size of the destination image should be reduced, if necessary, so that the region being copied into is at the bottom-right corner of the image. This option will not affect the width or height of the image if the user has specified a non-zero value for the **-width** or **-height** configuration option, respectively. **-zoom** *x y* Specifies that the source region should be magnified by a factor of *x* in the X direction and *y* in the Y direction. If *y* is not given, the default value is the same as *x*. With this option, each pixel in the source image will be expanded into a block of *x* x *y* pixels in the destination image, all the same color. *x* and *y* must be greater than 0. **-subsample** *x y* Specifies that the source image should be reduced in size by using only every *x*th pixel in the X direction and *y*th pixel in the Y direction. 
Negative values will cause the image to be flipped about the Y or X axes, respectively. If *y* is not given, the default value is the same as *x*. **-compositingrule** *rule* Specifies how transparent pixels in the source image are combined with the destination image. When a compositing rule of *overlay* is set, the old contents of the destination image are visible, as if the source image were printed on a piece of transparent film and placed over the top of the destination. When a compositing rule of *set* is set, the old contents of the destination image are discarded and the source image is used as-is. The default compositing rule is *overlay*. *imageName* **data** ?*option value(s) ...*? Returns image data in the form of a string. The following options may be specified: **-background** *color* If the color is specified, the data will not contain any transparency information. In all transparent pixels the color will be replaced by the specified color. **-format** *format-name* Specifies the name of the image file format handler to be used. Specifically, this subcommand searches for the first handler whose name matches an initial substring of *format-name* and which has the capability to write a string containing this image data. If this option is not given, this subcommand uses a format that consists of a list (one element per row) of lists (one element per pixel/column) of colors in “**#***rrggbb*” format (where *rr* is a pair of hexadecimal digits for the red channel, *gg* for green, and *bb* for blue). **-from** *x1 y1 x2 y2* Specifies a rectangular region of *imageName* to be returned. If only *x1* and *y1* are specified, the region extends from *(x1,y1)* to the bottom-right corner of *imageName*. If all four coordinates are given, they specify diagonally opposite corners of the rectangular region, including x1,y1 and excluding x2,y2. The default, if this option is not given, is the whole image. 
**-grayscale** If this option is specified, the data will not contain color information. All pixel data will be transformed into grayscale. *imageName* **get** *x y* Returns the color of the pixel at coordinates (*x*,*y*) in the image as a list of three integers between 0 and 255, representing the red, green and blue components respectively. *imageName* **put** *data* ?*option value(s) ...*? Sets pixels in *imageName* to the data specified in *data*. This command first searches the list of image file format handlers for a handler that can interpret the data in *data*, and then reads the image encoded within into *imageName* (the destination image). If *data* does not match any known format, an attempt to interpret it as a (top-to-bottom) list of scan-lines is made, with each scan-line being a (left-to-right) list of pixel colors (see **[Tk\_GetColor](https://www.tcl.tk/man/tcl/TkLib/GetColor.htm)** for a description of valid colors.) Every scan-line must be of the same length. Note that when *data* is a single color name, you are instructing Tk to fill a rectangular region with that color. The following options may be specified: **-format** *format-name* Specifies the format of the image data in *data*. Specifically, only image file format handlers whose names begin with *format-name* will be used while searching for an image data format handler to read the data. **-to** *x1 y1* ?*x2 y2*? Specifies the coordinates of the top-left corner (*x1*,*y1*) of the region of *imageName* into which the image data will be copied. The default position is (0,0). If *x2*,*y2* is given and *data* is not large enough to cover the rectangle specified by this option, the image data extracted will be tiled so it covers the entire destination rectangle. Note that if *data* specifies a single color value, then a region extending to the bottom-right corner represented by (*x2*,*y2*) will be filled with that color. *imageName* **read** *filename* ?*option value(s) ...*? 
Reads image data from the file named *filename* into the image. This command first searches the list of image file format handlers for a handler that can interpret the data in *filename*, and then reads the image in *filename* into *imageName* (the destination image). The following options may be specified: **-format** *format-name* Specifies the format of the image data in *filename*. Specifically, only image file format handlers whose names begin with *format-name* will be used while searching for an image data format handler to read the data. **-from** *x1 y1 x2 y2* Specifies a rectangular sub-region of the image file data to be copied to the destination image. If only *x1* and *y1* are specified, the region extends from (*x1,y1*) to the bottom-right corner of the image in the image file. If all four coordinates are specified, they specify diagonally opposite corners of the region. The default, if this option is not specified, is the whole of the image in the image file. **-shrink** If this option is specified, the size of *imageName* will be reduced, if necessary, so that the region into which the image file data are read is at the bottom-right corner of the *imageName*. This option will not affect the width or height of the image if the user has specified a non-zero value for the **-width** or **-height** configuration option, respectively. **-to** *x y* Specifies the coordinates of the top-left corner of the region of *imageName* into which data from *filename* are to be read. The default is (0,0). *imageName* **redither** The dithering algorithm used in displaying photo images propagates quantization errors from one pixel to its neighbors. If the image data for *imageName* is supplied in pieces, the dithered image may not be exactly correct. Normally the difference is not noticeable, but if it is a problem, this command can be used to recalculate the dithered image in each window where the image is displayed. *imageName* **transparency** *subcommand* ?*arg arg ...*? 
Allows examination and manipulation of the transparency information in the photo image. Several subcommands are available: *imageName* **transparency get** *x y* Returns a boolean indicating if the pixel at (*x*,*y*) is transparent. *imageName* **transparency set** *x y boolean* Makes the pixel at (*x*,*y*) transparent if *boolean* is true, and makes that pixel opaque otherwise. *imageName* **write** *filename* ?*option value(s) ...*? Writes image data from *imageName* to a file named *filename*. The following options may be specified: **-background** *color* If the color is specified, the data will not contain any transparency information. In all transparent pixels the color will be replaced by the specified color. **-format** *format-name* Specifies the name of the image file format handler to be used to write the data to the file. Specifically, this subcommand searches for the first handler whose name matches an initial substring of *format-name* and which has the capability to write an image file. If this option is not given, the format is guessed from the file extension. If that cannot be determined, this subcommand uses the first handler that has the capability to write an image file. **-from** *x1 y1 x2 y2* Specifies a rectangular region of *imageName* to be written to the image file. If only *x1* and *y1* are specified, the region extends from *(x1,y1)* to the bottom-right corner of *imageName*. If all four coordinates are given, they specify diagonally opposite corners of the rectangular region. The default, if this option is not given, is the whole image. **-grayscale** If this option is specified, the data will not contain color information. All pixel data will be transformed into grayscale. Image formats ------------- The photo image code is structured to allow handlers for additional image file formats to be added easily. The photo image code maintains a list of these handlers. 
Handlers are added to the list by registering them with a call to **[Tk\_CreatePhotoImageFormat](https://www.tcl.tk/man/tcl/TkLib/CrtPhImgFmt.htm)**. The standard Tk distribution comes with handlers for PPM/PGM, PNG and GIF formats, which are automatically registered on initialization. When reading an image file or processing string data specified with the **-data** configuration option, the photo image code invokes each handler in turn until one is found that claims to be able to read the data in the file or string. Usually this will find the correct handler, but if it does not, the user may give a format name with the **-format** option to specify which handler to use. In fact the photo image code will try those handlers whose names begin with the string specified for the **-format** option (the comparison is case-insensitive). For example, if the user specifies **-format gif**, then a handler named GIF87 or GIF89 may be invoked, but a handler named JPEG may not (assuming that such handlers had been registered). When writing image data to a file, the processing of the **-format** option is slightly different: the string value given for the **-format** option must begin with the complete name of the requested handler, and may contain additional information following that, which the handler can use, for example, to specify which variant to use of the formats supported by the handler. Note that not all image handlers may support writing transparency data to a file, even where the target image format does. ### Format suboptions Some image formats support sub-options, which are specified at the time that the image is loaded using additional words in the **-format** option. At the time of writing, the following are supported: **gif -index** *indexValue* When parsing a multi-part GIF image, Tk normally only accesses the first image. By giving the **-index** sub-option, the *indexValue*'th value may be used instead. 
The *indexValue* must be an integer from 0 up to the number of image parts in the GIF data. **png -alpha** *alphaValue* An additional alpha filtering for the overall image, which allows the background on which the image is displayed to show through. This usually also has the effect of desaturating the image. The *alphaValue* must be between 0.0 and 1.0. Color allocation ---------------- When a photo image is displayed in a window, the photo image code allocates colors to use to display the image and dithers the image, if necessary, to display a reasonable approximation to the image using the colors that are available. The colors are allocated as a color cube, that is, the number of colors allocated is the product of the number of shades of red, green and blue. Normally, the number of colors allocated is chosen based on the depth of the window. For example, in an 8-bit PseudoColor window, the photo image code will attempt to allocate seven shades of red, seven shades of green and four shades of blue, for a total of 196 colors. In a 1-bit StaticGray (monochrome) window, it will allocate two colors, black and white. In a 24-bit DirectColor or TrueColor window, it will allocate 256 shades each of red, green and blue. Fortunately, because of the way that pixel values can be combined in DirectColor and TrueColor windows, this only requires 256 colors to be allocated. If not all of the colors can be allocated, the photo image code reduces the number of shades of each primary color and tries again. The user can exercise some control over the number of colors that a photo image uses with the **-palette** configuration option. If this option is used, it specifies the maximum number of shades of each primary color to try to allocate. It can also be used to force the image to be displayed in shades of gray, even on a color display, by giving a single number rather than three numbers separated by slashes. 
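As a brief illustration of the **-palette** forms described above (the file name is a placeholder):

```tcl
# Restrict the image to a 5x5x5 color cube: at most 125 colors.
image create photo limited -file "photo.ppm" -palette 5/5/5

# A single number forces grayscale display: here, 32 shades of gray.
image create photo mono -file "photo.ppm" -palette 32
```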
Credits ------- The photo image type was designed and implemented by Paul Mackerras, based on his earlier photo widget and some suggestions from John Ousterhout. Example ------- Load an image from a file and tile it to the size of a window, which is useful for producing a tiled background: ``` # These lines should be called once **image create photo** untiled -file "theFile.ppm" **image create photo** tiled # These lines should be called whenever .someWidget changes # size; a <Configure> binding is useful here set width [winfo width .someWidget] set height [winfo height .someWidget] tiled **copy** untiled -to 0 0 $width $height -shrink ``` The PNG image loader allows the application of an additional alpha factor during loading, which is useful for generating images suitable for disabled buttons: ``` **image create photo** icon -file "icon.png" **image create photo** iconDisabled -file "icon.png" \ -format "png -alpha 0.5" button .b -image icon -disabledimage iconDisabled ``` See also -------- **[image](image.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/photo.htm>
tcl_tk ttk_menubutton ttk\_menubutton =============== [NAME](ttk_menubutton.htm#M2) ttk::menubutton — Widget that pops down a menu when pressed [SYNOPSIS](ttk_menubutton.htm#M3) [DESCRIPTION](ttk_menubutton.htm#M4) [STANDARD OPTIONS](ttk_menubutton.htm#M5) [-class, undefined, undefined](ttk_widget.htm#M-class) [-compound, compound, Compound](ttk_widget.htm#M-compound) [-cursor, cursor, Cursor](ttk_widget.htm#M-cursor) [-image, image, Image](ttk_widget.htm#M-image) [-state, state, State](ttk_widget.htm#M-state) [-style, style, Style](ttk_widget.htm#M-style) [-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus) [-text, text, Text](ttk_widget.htm#M-text) [-textvariable, textVariable, Variable](ttk_widget.htm#M-textvariable) [-underline, underline, Underline](ttk_widget.htm#M-underline) [-width, width, Width](ttk_widget.htm#M-width) [WIDGET-SPECIFIC OPTIONS](ttk_menubutton.htm#M6) [-direction, direction, Direction](ttk_menubutton.htm#M7) [-menu, menu, Menu](ttk_menubutton.htm#M8) [WIDGET COMMAND](ttk_menubutton.htm#M9) [STANDARD STYLES](ttk_menubutton.htm#M10) [SEE ALSO](ttk_menubutton.htm#M11) [KEYWORDS](ttk_menubutton.htm#M12) Name ---- ttk::menubutton — Widget that pops down a menu when pressed Synopsis -------- **ttk::menubutton** *pathName* ?*options*? Description ----------- A **ttk::menubutton** widget displays a textual label and/or image, and displays a menu when pressed. 
Standard options ---------------- **[-class, undefined, undefined](ttk_widget.htm#M-class)** **[-compound, compound, Compound](ttk_widget.htm#M-compound)** **[-cursor, cursor, Cursor](ttk_widget.htm#M-cursor)** **[-image, image, Image](ttk_widget.htm#M-image)** **[-state, state, State](ttk_widget.htm#M-state)** **[-style, style, Style](ttk_widget.htm#M-style)** **[-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus)** **[-text, text, Text](ttk_widget.htm#M-text)** **[-textvariable, textVariable, Variable](ttk_widget.htm#M-textvariable)** **[-underline, underline, Underline](ttk_widget.htm#M-underline)** **[-width, width, Width](ttk_widget.htm#M-width)** Widget-specific options ----------------------- Command-Line Name: **-direction** Database Name: **direction** Database Class: **Direction** Specifies where the menu is to be popped up relative to the menubutton. One of: **above**, **below**, **left**, **right**, or **flush**. The default is **below**. **flush** pops the menu up directly over the menubutton. Command-Line Name: **-menu** Database Name: **[menu](menu.htm)** Database Class: **[Menu](menu.htm)** Specifies the path name of the menu associated with the menubutton. To be on the safe side, the menu ought to be a direct child of the menubutton. Widget command -------------- Menubutton widgets support the standard **cget**, **configure**, **identify**, **instate**, and **state** methods. No other widget methods are used. Standard styles --------------- **Ttk::menubutton** widgets support the **Toolbutton** style in all standard themes, which is useful for creating widgets for toolbars. 
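As a brief sketch (widget names and labels are illustrative), the following creates a **ttk::menubutton** whose menu is a direct child of it, as recommended above:

```tcl
package require Tk

# Create the menubutton first, then its menu as a direct child,
# and attach the menu via the -menu option.
ttk::menubutton .mb -text "Options" -direction below
menu .mb.menu -tearoff 0
.mb.menu add command -label "Open..." -command {puts "open"}
.mb.menu add command -label "Save"    -command {puts "save"}
.mb configure -menu .mb.menu
pack .mb -padx 10 -pady 10
```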
See also -------- **[ttk::widget](ttk_widget.htm)**, **[menu](menu.htm)**, **[menubutton](menubutton.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/ttk_menubutton.htm> tcl_tk toplevel toplevel ======== [NAME](toplevel.htm#M2) toplevel — Create and manipulate 'toplevel' main and popup window widgets [SYNOPSIS](toplevel.htm#M3) [STANDARD OPTIONS](toplevel.htm#M4) [-borderwidth or -bd, borderWidth, BorderWidth](options.htm#M-borderwidth) [-cursor, cursor, Cursor](options.htm#M-cursor) [-highlightbackground, highlightBackground, HighlightBackground](options.htm#M-highlightbackground) [-highlightcolor, highlightColor, HighlightColor](options.htm#M-highlightcolor) [-highlightthickness, highlightThickness, HighlightThickness](options.htm#M-highlightthickness) [-padx, padX, Pad](options.htm#M-padx) [-pady, padY, Pad](options.htm#M-pady) [-relief, relief, Relief](options.htm#M-relief) [-takefocus, takeFocus, TakeFocus](options.htm#M-takefocus) [WIDGET-SPECIFIC OPTIONS](toplevel.htm#M5) [-background, background, Background](toplevel.htm#M6) [-class, class, Class](toplevel.htm#M7) [-colormap, colormap, Colormap](toplevel.htm#M8) [-container, container, Container](toplevel.htm#M9) [-height, height, Height](toplevel.htm#M10) [-menu, menu, Menu](toplevel.htm#M11) [-screen, ,](toplevel.htm#M12) [-use, use, Use](toplevel.htm#M13) [-visual, visual, Visual](toplevel.htm#M14) [-width, width, Width](toplevel.htm#M15) [DESCRIPTION](toplevel.htm#M16) [WIDGET COMMAND](toplevel.htm#M17) [*pathName* **cget** *option*](toplevel.htm#M18) [*pathName* **configure** ?*option*? ?*value option value ...*?](toplevel.htm#M19) [BINDINGS](toplevel.htm#M20) [SEE ALSO](toplevel.htm#M21) [KEYWORDS](toplevel.htm#M22) Name ---- toplevel — Create and manipulate 'toplevel' main and popup window widgets Synopsis -------- **toplevel** *pathName* ?*options*? 
Standard options ---------------- **[-borderwidth or -bd, borderWidth, BorderWidth](options.htm#M-borderwidth)** **[-cursor, cursor, Cursor](options.htm#M-cursor)** **[-highlightbackground, highlightBackground, HighlightBackground](options.htm#M-highlightbackground)** **[-highlightcolor, highlightColor, HighlightColor](options.htm#M-highlightcolor)** **[-highlightthickness, highlightThickness, HighlightThickness](options.htm#M-highlightthickness)** **[-padx, padX, Pad](options.htm#M-padx)** **[-pady, padY, Pad](options.htm#M-pady)** **[-relief, relief, Relief](options.htm#M-relief)** **[-takefocus, takeFocus, TakeFocus](options.htm#M-takefocus)** Widget-specific options ----------------------- Command-Line Name: **-background** Database Name: **background** Database Class: **Background** This option is the same as the standard **-background** option except that its value may also be specified as an empty string. In this case, the widget will display no background or border, and no colors will be consumed from its colormap for its background and border. Command-Line Name: **-class** Database Name: **class** Database Class: **Class** Specifies a class for the window. This class will be used when querying the option database for the window's other options, and it will also be used later for other purposes such as bindings. The **-class** option may not be changed with the **configure** widget command. Command-Line Name: **-colormap** Database Name: **colormap** Database Class: **Colormap** Specifies a colormap to use for the window. The value may be either **new**, in which case a new colormap is created for the window and its children, or the name of another window (which must be on the same screen and have the same visual as *pathName*), in which case the new window will use the colormap from the specified window. If the **-colormap** option is not specified, the new window uses the default colormap of its screen. 
This option may not be changed with the **configure** widget command. Command-Line Name: **-container** Database Name: **container** Database Class: **Container** The value must be a boolean. If true, it means that this window will be used as a container in which some other application will be embedded (for example, a Tk toplevel can be embedded using the **-use** option). The window will support the appropriate window manager protocols for things like geometry requests. The window should not have any children of its own in this application. This option may not be changed with the **configure** widget command. Command-Line Name: **-height** Database Name: **height** Database Class: **Height** Specifies the desired height for the window in any of the forms acceptable to **[Tk\_GetPixels](https://www.tcl.tk/man/tcl/TkLib/GetPixels.htm)**. If this option is less than or equal to zero then the window will not request any size at all. Command-Line Name: **-menu** Database Name: **[menu](menu.htm)** Database Class: **[Menu](menu.htm)** Specifies a menu widget to be used as a menubar. On the Macintosh, the menubar will be displayed across the top of the main monitor. On Microsoft Windows and all UNIX platforms, the menu will appear across the toplevel window as part of the window dressing maintained by the window manager. Command-Line Name: **-screen** Database Name: Database Class: Specifies the screen on which to place the new window. Any valid screen name may be used, even one associated with a different display. Defaults to the same screen as its parent. This option is special in that it may not be specified via the option database, and it may not be modified with the **configure** widget command. Command-Line Name: **-use** Database Name: **use** Database Class: **Use** This option is used for embedding. 
If the value is not an empty string, it must be the window identifier of a container window, specified as a hexadecimal string like the ones returned by the **[winfo id](winfo.htm)** command. The toplevel widget will be created as a child of the given container instead of the root window for the screen. If the container window is in a Tk application, it must be a frame or toplevel widget for which the **-container** option was specified. This option may not be changed with the **configure** widget command. Command-Line Name: **-visual** Database Name: **visual** Database Class: **Visual** Specifies visual information for the new window in any of the forms accepted by **[Tk\_GetVisual](https://www.tcl.tk/man/tcl/TkLib/GetVisual.htm)**. If this option is not specified, the new window will use the default visual for its screen. The **-visual** option may not be modified with the **configure** widget command. Command-Line Name: **-width** Database Name: **width** Database Class: **Width** Specifies the desired width for the window in any of the forms acceptable to **[Tk\_GetPixels](https://www.tcl.tk/man/tcl/TkLib/GetPixels.htm)**. If this option is less than or equal to zero then the window will not request any size at all. Description ----------- The **toplevel** command creates a new toplevel widget (given by the *pathName* argument). Additional options, described above, may be specified on the command line or in the option database to configure aspects of the toplevel such as its background color and relief. The **toplevel** command returns the path name of the new window. A toplevel is similar to a frame except that it is created as a top-level window: its X parent is the root window of a screen rather than the logical parent from its path name. The primary purpose of a toplevel is to serve as a container for dialog boxes and other collections of widgets. 
The only visible features of a toplevel are its background color and an optional 3-D border to make the toplevel appear raised or sunken. Widget command -------------- The **toplevel** command creates a new Tcl command whose name is the same as the path name of the toplevel's window. This command may be used to invoke various operations on the widget. It has the following general form: ``` *pathName option* ?*arg arg ...*? ``` *PathName* is the name of the command, which is the same as the toplevel widget's path name. *Option* and the *arg*s determine the exact behavior of the command. The following commands are possible for toplevel widgets: *pathName* **cget** *option* Returns the current value of the configuration option given by *option*. *Option* may have any of the values accepted by the **toplevel** command. *pathName* **configure** ?*option*? ?*value option value ...*? Query or modify the configuration options of the widget. If no *option* is specified, returns a list describing all of the available options for *pathName* (see **[Tk\_ConfigureInfo](https://www.tcl.tk/man/tcl/TkLib/ConfigWidg.htm)** for information on the format of this list). If *option* is specified with no *value*, then the command returns a list describing the one named option (this list will be identical to the corresponding sublist of the value returned if no *option* is specified). If one or more *option-value* pairs are specified, then the command modifies the given widget option(s) to have the given value(s); in this case the command returns an empty string. *Option* may have any of the values accepted by the **toplevel** command. Bindings -------- When a new toplevel is created, it has no default event bindings: toplevels are not intended to be interactive. 
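The manual page itself gives no example, so here is a minimal sketch of the common pattern (widget names and labels are illustrative): a toplevel serving as a container for a small dialog, with window-manager properties set via the **wm** command.

```tcl
package require Tk

# A toplevel as a dialog container; its children supply the content.
toplevel .dlg
wm title .dlg "Confirm"
label  .dlg.msg -text "Really quit?"
button .dlg.yes -text "Yes" -command exit
button .dlg.no  -text "No"  -command {destroy .dlg}
pack .dlg.msg -side top -padx 20 -pady 10
pack .dlg.yes .dlg.no -side left -expand 1 -pady 5
```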
See also -------- **[frame](frame.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/toplevel.htm> tcl_tk labelframe labelframe ========== [NAME](labelframe.htm#M2) labelframe — Create and manipulate 'labelframe' labelled container widgets [SYNOPSIS](labelframe.htm#M3) [STANDARD OPTIONS](labelframe.htm#M4) [-borderwidth or -bd, borderWidth, BorderWidth](options.htm#M-borderwidth) [-cursor, cursor, Cursor](options.htm#M-cursor) [-font, font, Font](options.htm#M-font) [-foreground or -fg, foreground, Foreground](options.htm#M-foreground) [-highlightbackground, highlightBackground, HighlightBackground](options.htm#M-highlightbackground) [-highlightcolor, highlightColor, HighlightColor](options.htm#M-highlightcolor) [-highlightthickness, highlightThickness, HighlightThickness](options.htm#M-highlightthickness) [-padx, padX, Pad](options.htm#M-padx) [-pady, padY, Pad](options.htm#M-pady) [-relief, relief, Relief](options.htm#M-relief) [-takefocus, takeFocus, TakeFocus](options.htm#M-takefocus) [-text, text, Text](options.htm#M-text) [WIDGET-SPECIFIC OPTIONS](labelframe.htm#M5) [-background, background, Background](labelframe.htm#M6) [-class, class, Class](labelframe.htm#M7) [-colormap, colormap, Colormap](labelframe.htm#M8) [-height, height, Height](labelframe.htm#M9) [-labelanchor, labelAnchor, LabelAnchor](labelframe.htm#M10) [-labelwidget, labelWidget, LabelWidget](labelframe.htm#M11) [-visual, visual, Visual](labelframe.htm#M12) [-width, width, Width](labelframe.htm#M13) [DESCRIPTION](labelframe.htm#M14) [WIDGET COMMAND](labelframe.htm#M15) [*pathName* **cget** *option*](labelframe.htm#M16) [*pathName* **configure** ?*option*? 
*?value option value ...*?](labelframe.htm#M17) [BINDINGS](labelframe.htm#M18) [EXAMPLE](labelframe.htm#M19) [SEE ALSO](labelframe.htm#M20) [KEYWORDS](labelframe.htm#M21) Name ---- labelframe — Create and manipulate 'labelframe' labelled container widgets Synopsis -------- **labelframe** *pathName* ?*options*? Standard options ---------------- **[-borderwidth or -bd, borderWidth, BorderWidth](options.htm#M-borderwidth)** **[-cursor, cursor, Cursor](options.htm#M-cursor)** **[-font, font, Font](options.htm#M-font)** **[-foreground or -fg, foreground, Foreground](options.htm#M-foreground)** **[-highlightbackground, highlightBackground, HighlightBackground](options.htm#M-highlightbackground)** **[-highlightcolor, highlightColor, HighlightColor](options.htm#M-highlightcolor)** **[-highlightthickness, highlightThickness, HighlightThickness](options.htm#M-highlightthickness)** **[-padx, padX, Pad](options.htm#M-padx)** **[-pady, padY, Pad](options.htm#M-pady)** **[-relief, relief, Relief](options.htm#M-relief)** **[-takefocus, takeFocus, TakeFocus](options.htm#M-takefocus)** **[-text, text, Text](options.htm#M-text)** Widget-specific options ----------------------- Command-Line Name: **-background** Database Name: **background** Database Class: **Background** This option is the same as the standard **-background** option except that its value may also be specified as an empty string. In this case, the widget will display no background or border, and no colors will be consumed from its colormap for its background and border. Command-Line Name: **-class** Database Name: **class** Database Class: **Class** Specifies a class for the window. This class will be used when querying the option database for the window's other options, and it will also be used later for other purposes such as bindings. The **-class** option may not be changed with the **configure** widget command. 
Command-Line Name: **-colormap** Database Name: **colormap** Database Class: **Colormap** Specifies a colormap to use for the window. The value may be either **new**, in which case a new colormap is created for the window and its children, or the name of another window (which must be on the same screen and have the same visual as *pathName*), in which case the new window will use the colormap from the specified window. If the **-colormap** option is not specified, the new window uses the same colormap as its parent. This option may not be changed with the **configure** widget command. Command-Line Name: **-height** Database Name: **height** Database Class: **Height** Specifies the desired height for the window in any of the forms acceptable to **[Tk\_GetPixels](https://www.tcl.tk/man/tcl/TkLib/GetPixels.htm)**. If this option is less than or equal to zero then the window will not request any size at all. Command-Line Name: **-labelanchor** Database Name: **labelAnchor** Database Class: **LabelAnchor** Specifies where to place the label. A label is only displayed if the **-text** option is not the empty string. Valid values for this option are (listing them clockwise) **nw**, **n**, **ne**, **en**, **e**, **es**, **se**, **s**,**sw**, **ws**, **w** and **wn**. The default value is **nw**. Command-Line Name: **-labelwidget** Database Name: **labelWidget** Database Class: **LabelWidget** Specifies a widget to use as label. This overrides any **-text** option. The widget must exist before being used as **-labelwidget** and if it is not a descendant of this window, it will be raised above it in the stacking order. Command-Line Name: **-visual** Database Name: **visual** Database Class: **Visual** Specifies visual information for the new window in any of the forms accepted by **[Tk\_GetVisual](https://www.tcl.tk/man/tcl/TkLib/GetVisual.htm)**. If this option is not specified, the new window will use the same visual as its parent. 
The **-visual** option may not be modified with the **configure** widget command. Command-Line Name: **-width** Database Name: **width** Database Class: **Width** Specifies the desired width for the window in any of the forms acceptable to **[Tk\_GetPixels](https://www.tcl.tk/man/tcl/TkLib/GetPixels.htm)**. If this option is less than or equal to zero then the window will not request any size at all. Description ----------- The **labelframe** command creates a new window (given by the *pathName* argument) and makes it into a labelframe widget. Additional options, described above, may be specified on the command line or in the option database to configure aspects of the labelframe such as its background color and relief. The **labelframe** command returns the path name of the new window. A labelframe is a simple widget. Its primary purpose is to act as a spacer or container for complex window layouts. It has the features of a **[frame](frame.htm)** plus the ability to display a label. Widget command -------------- The **labelframe** command creates a new Tcl command whose name is the same as the path name of the labelframe's window. This command may be used to invoke various operations on the widget. It has the following general form: ``` *pathName option* ?*arg arg ...*? ``` *PathName* is the name of the command, which is the same as the labelframe widget's path name. *Option* and the *arg*s determine the exact behavior of the command. The following commands are possible for frame widgets: *pathName* **cget** *option* Returns the current value of the configuration option given by *option*. *Option* may have any of the values accepted by the **labelframe** command. *pathName* **configure** ?*option*? *?value option value ...*? Query or modify the configuration options of the widget. 
If no *option* is specified, returns a list describing all of the available options for *pathName* (see **[Tk\_ConfigureInfo](https://www.tcl.tk/man/tcl/TkLib/ConfigWidg.htm)** for information on the format of this list). If *option* is specified with no *value*, then the command returns a list describing the one named option (this list will be identical to the corresponding sublist of the value returned if no *option* is specified). If one or more *option-value* pairs are specified, then the command modifies the given widget option(s) to have the given value(s); in this case the command returns an empty string. *Option* may have any of the values accepted by the **labelframe** command. Bindings -------- When a new labelframe is created, it has no default event bindings: labelframes are not intended to be interactive. Example ------- This shows how to build part of a GUI for a hamburger vendor. The **labelframe** widgets are used to organize the available choices by the kinds of things that the choices are being made over. 
``` grid [**labelframe** .burger -text "Burger"] \ [**labelframe** .bun -text "Bun"] -sticky news grid [**labelframe** .cheese -text "Cheese Option"] \ [**labelframe** .pickle -text "Pickle Option"] -sticky news foreach {type name val} { burger Beef beef burger Lamb lamb burger Vegetarian beans bun Plain white bun Sesame seeds bun Wholemeal brown cheese None none cheese Cheddar cheddar cheese Edam edam cheese Brie brie cheese Gruy\u00e8re gruyere cheese "Monterey Jack" jack pickle None none pickle Gherkins gherkins pickle Onions onion pickle Chili chili } { set w [radiobutton .$type.$val -text $name -anchor w \ -variable $type -value $val] pack $w -side top -fill x } set burger beef set bun white set cheese none set pickle none ``` See also -------- **[frame](frame.htm)**, **[label](label.htm)**, **[ttk::labelframe](ttk_labelframe.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/labelframe.htm>
tcl_tk frame frame ===== [NAME](frame.htm#M2) frame — Create and manipulate 'frame' simple container widgets [SYNOPSIS](frame.htm#M3) [STANDARD OPTIONS](frame.htm#M4) [-borderwidth or -bd, borderWidth, BorderWidth](options.htm#M-borderwidth) [-cursor, cursor, Cursor](options.htm#M-cursor) [-highlightbackground, highlightBackground, HighlightBackground](options.htm#M-highlightbackground) [-highlightcolor, highlightColor, HighlightColor](options.htm#M-highlightcolor) [-highlightthickness, highlightThickness, HighlightThickness](options.htm#M-highlightthickness) [-padx, padX, Pad](options.htm#M-padx) [-pady, padY, Pad](options.htm#M-pady) [-relief, relief, Relief](options.htm#M-relief) [-takefocus, takeFocus, TakeFocus](options.htm#M-takefocus) [WIDGET-SPECIFIC OPTIONS](frame.htm#M5) [-background, background, Background](frame.htm#M6) [-class, class, Class](frame.htm#M7) [-colormap, colormap, Colormap](frame.htm#M8) [-container, container, Container](frame.htm#M9) [-height, height, Height](frame.htm#M10) [-visual, visual, Visual](frame.htm#M11) [-width, width, Width](frame.htm#M12) [DESCRIPTION](frame.htm#M13) [WIDGET COMMAND](frame.htm#M14) [*pathName* **cget** *option*](frame.htm#M15) [*pathName* **configure** ?*option*? *?value option value ...*?](frame.htm#M16) [BINDINGS](frame.htm#M17) [SEE ALSO](frame.htm#M18) [KEYWORDS](frame.htm#M19) Name ---- frame — Create and manipulate 'frame' simple container widgets Synopsis -------- **frame** *pathName* ?*options*? 
Standard options ---------------- **[-borderwidth or -bd, borderWidth, BorderWidth](options.htm#M-borderwidth)** **[-cursor, cursor, Cursor](options.htm#M-cursor)** **[-highlightbackground, highlightBackground, HighlightBackground](options.htm#M-highlightbackground)** **[-highlightcolor, highlightColor, HighlightColor](options.htm#M-highlightcolor)** **[-highlightthickness, highlightThickness, HighlightThickness](options.htm#M-highlightthickness)** **[-padx, padX, Pad](options.htm#M-padx)** **[-pady, padY, Pad](options.htm#M-pady)** **[-relief, relief, Relief](options.htm#M-relief)** **[-takefocus, takeFocus, TakeFocus](options.htm#M-takefocus)** Widget-specific options ----------------------- Command-Line Name: **-background** Database Name: **background** Database Class: **Background** This option is the same as the standard **-background** option except that its value may also be specified as an empty string. In this case, the widget will display no background or border, and no colors will be consumed from its colormap for its background and border. Command-Line Name: **-class** Database Name: **class** Database Class: **Class** Specifies a class for the window. This class will be used when querying the option database for the window's other options, and it will also be used later for other purposes such as bindings. The **-class** option may not be changed with the **configure** widget command. Command-Line Name: **-colormap** Database Name: **colormap** Database Class: **Colormap** Specifies a colormap to use for the window. The value may be either **new**, in which case a new colormap is created for the window and its children, or the name of another window (which must be on the same screen and have the same visual as *pathName*), in which case the new window will use the colormap from the specified window. If the **-colormap** option is not specified, the new window uses the same colormap as its parent. 
This option may not be changed with the **configure** widget command. Command-Line Name: **-container** Database Name: **container** Database Class: **Container** The value must be a boolean. If true, it means that this window will be used as a container in which some other application will be embedded (for example, a Tk toplevel can be embedded using the **-use** option). The window will support the appropriate window manager protocols for things like geometry requests. The window should not have any children of its own in this application. This option may not be changed with the **configure** widget command. Note that **-borderwidth**, **-padx** and **-pady** are ignored when configured as a container since a container has no border. Command-Line Name: **-height** Database Name: **height** Database Class: **Height** Specifies the desired height for the window in any of the forms acceptable to **[Tk\_GetPixels](https://www.tcl.tk/man/tcl/TkLib/GetPixels.htm)**. If this option is less than or equal to zero then the window will not request any size at all. Note that this sets the total height of the frame; any **-borderwidth** or similar is not added. Normally **-height** should not be used if a propagating geometry manager, such as **[grid](grid.htm)** or **[pack](pack.htm)**, is used within the frame since the geometry manager will override the height of the frame. Command-Line Name: **-visual** Database Name: **visual** Database Class: **Visual** Specifies visual information for the new window in any of the forms accepted by **[Tk\_GetVisual](https://www.tcl.tk/man/tcl/TkLib/GetVisual.htm)**. If this option is not specified, the new window will use the same visual as its parent. The **-visual** option may not be modified with the **configure** widget command. 
Command-Line Name: **-width** Database Name: **width** Database Class: **Width** Specifies the desired width for the window in any of the forms acceptable to **[Tk\_GetPixels](https://www.tcl.tk/man/tcl/TkLib/GetPixels.htm)**. If this option is less than or equal to zero then the window will not request any size at all. Note that this sets the total width of the frame; any **-borderwidth** or similar is not added. Normally **-width** should not be used if a propagating geometry manager, such as **[grid](grid.htm)** or **[pack](pack.htm)**, is used within the frame since the geometry manager will override the width of the frame. Description ----------- The **frame** command creates a new window (given by the *pathName* argument) and makes it into a frame widget. Additional options, described above, may be specified on the command line or in the option database to configure aspects of the frame such as its background color and relief. The **frame** command returns the path name of the new window. A frame is a simple widget. Its primary purpose is to act as a spacer or container for complex window layouts. The only features of a frame are its background color and an optional 3-D border to make the frame appear raised or sunken. Widget command -------------- The **frame** command creates a new Tcl command whose name is the same as the path name of the frame's window. This command may be used to invoke various operations on the widget. It has the following general form: ``` *pathName option* ?*arg arg ...*? ``` *PathName* is the name of the command, which is the same as the frame widget's path name. *Option* and the *arg*s determine the exact behavior of the command. The following commands are possible for frame widgets: *pathName* **cget** *option* Returns the current value of the configuration option given by *option*. *Option* may have any of the values accepted by the **frame** command. *pathName* **configure** ?*option*? ?*value option value ...*? 
Query or modify the configuration options of the widget. If no *option* is specified, returns a list describing all of the available options for *pathName* (see **[Tk\_ConfigureInfo](https://www.tcl.tk/man/tcl/TkLib/ConfigWidg.htm)** for information on the format of this list). If *option* is specified with no *value*, then the command returns a list describing the one named option (this list will be identical to the corresponding sublist of the value returned if no *option* is specified). If one or more *option-value* pairs are specified, then the command modifies the given widget option(s) to have the given value(s); in this case the command returns an empty string. *Option* may have any of the values accepted by the **frame** command. Bindings -------- When a new frame is created, it has no default event bindings: frames are not intended to be interactive. See also -------- **[labelframe](labelframe.htm)**, **[toplevel](toplevel.htm)**, **[ttk::frame](ttk_frame.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/frame.htm> tcl_tk listbox listbox ======= [NAME](listbox.htm#M2) listbox — Create and manipulate 'listbox' item list widgets [SYNOPSIS](listbox.htm#M3) [STANDARD OPTIONS](listbox.htm#M4) [-background or -bg, background, Background](options.htm#M-background) [-borderwidth or -bd, borderWidth, BorderWidth](options.htm#M-borderwidth) [-cursor, cursor, Cursor](options.htm#M-cursor) [-disabledforeground, disabledForeground, DisabledForeground](options.htm#M-disabledforeground) [-exportselection, exportSelection, ExportSelection](options.htm#M-exportselection) [-font, font, Font](options.htm#M-font) [-foreground or -fg, foreground, Foreground](options.htm#M-foreground) [-highlightbackground, highlightBackground, HighlightBackground](options.htm#M-highlightbackground) [-highlightcolor, highlightColor, HighlightColor](options.htm#M-highlightcolor) [-highlightthickness, highlightThickness, 
HighlightThickness](options.htm#M-highlightthickness) [-justify, justify, Justify](options.htm#M-justify) [-relief, relief, Relief](options.htm#M-relief) [-selectbackground, selectBackground, Foreground](options.htm#M-selectbackground) [-selectborderwidth, selectBorderWidth, BorderWidth](options.htm#M-selectborderwidth) [-selectforeground, selectForeground, Background](options.htm#M-selectforeground) [-setgrid, setGrid, SetGrid](options.htm#M-setgrid) [-takefocus, takeFocus, TakeFocus](options.htm#M-takefocus) [-xscrollcommand, xScrollCommand, ScrollCommand](options.htm#M-xscrollcommand) [-yscrollcommand, yScrollCommand, ScrollCommand](options.htm#M-yscrollcommand) [WIDGET-SPECIFIC OPTIONS](listbox.htm#M5) [-activestyle, activeStyle, ActiveStyle](listbox.htm#M6) [-height, height, Height](listbox.htm#M7) [-listvariable, listVariable, Variable](listbox.htm#M8) [-selectmode, selectMode, SelectMode](listbox.htm#M9) [-state, state, State](listbox.htm#M10) [-width, width, Width](listbox.htm#M11) [DESCRIPTION](listbox.htm#M12) [INDICES](listbox.htm#M13) [*number*](listbox.htm#M14) [**active**](listbox.htm#M15) [**anchor**](listbox.htm#M16) [**end**](listbox.htm#M17) [**@***x***,***y*](listbox.htm#M18) [WIDGET COMMAND](listbox.htm#M19) [*pathName* **activate** *index*](listbox.htm#M20) [*pathName* **bbox** *index*](listbox.htm#M21) [*pathName* **cget** *option*](listbox.htm#M22) [*pathName* **configure** ?*option*? ?*value option value ...*?](listbox.htm#M23) [*pathName* **curselection**](listbox.htm#M24) [*pathName* **delete** *first* ?*last*?](listbox.htm#M25) [*pathName* **get** *first* ?*last*?](listbox.htm#M26) [*pathName* **index** *index*](listbox.htm#M27) [*pathName* **insert** *index* ?*element element ...*?](listbox.htm#M28) [*pathName* **itemcget** *index option*](listbox.htm#M29) [*pathName* **itemconfigure** *index* ?*option*? ?*value*? 
?*option value ...*?](listbox.htm#M30) [**-background** *color*](listbox.htm#M31) [**-foreground** *color*](listbox.htm#M32) [**-selectbackground** *color*](listbox.htm#M33) [**-selectforeground** *color*](listbox.htm#M34) [*pathName* **nearest** *y*](listbox.htm#M35) [*pathName* **scan** *option args*](listbox.htm#M36) [*pathName* **scan mark** *x y*](listbox.htm#M37) [*pathName* **scan dragto** *x y*](listbox.htm#M38) [*pathName* **see** *index*](listbox.htm#M39) [*pathName* **selection** *option arg*](listbox.htm#M40) [*pathName* **selection anchor** *index*](listbox.htm#M41) [*pathName* **selection clear** *first* ?*last*?](listbox.htm#M42) [*pathName* **selection includes** *index*](listbox.htm#M43) [*pathName* **selection set** *first* ?*last*?](listbox.htm#M44) [*pathName* **size**](listbox.htm#M45) [*pathName* **xview** ?*args*?](listbox.htm#M46) [*pathName* **xview**](listbox.htm#M47) [*pathName* **xview** *index*](listbox.htm#M48) [*pathName* **xview moveto** *fraction*](listbox.htm#M49) [*pathName* **xview scroll** *number what*](listbox.htm#M50) [*pathName* **yview** ?*args*?](listbox.htm#M51) [*pathName* **yview**](listbox.htm#M52) [*pathName* **yview** *index*](listbox.htm#M53) [*pathName* **yview moveto** *fraction*](listbox.htm#M54) [*pathName* **yview scroll** *number what*](listbox.htm#M55) [DEFAULT BINDINGS](listbox.htm#M56) [SEE ALSO](listbox.htm#M57) [KEYWORDS](listbox.htm#M58) Name ---- listbox — Create and manipulate 'listbox' item list widgets Synopsis -------- **listbox** *pathName* ?*options*? 
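As a quick orientation before the option reference, here is a minimal sketch (not from the man page; widget path names are illustrative) of a listbox paired with a vertical scrollbar through the standard **-yscrollcommand** option:

```tcl
# A listbox in extended selection mode, scrolled by a vertical scrollbar.
listbox .lb -selectmode extended -height 5 -yscrollcommand {.sb set}
scrollbar .sb -orient vertical -command {.lb yview}
pack .sb -side right -fill y
pack .lb -side left -fill both -expand 1
.lb insert end "alpha" "beta" "gamma" "delta" "epsilon" "zeta"
```

The scrollbar calls the widget's **yview** command to change the view, and the listbox reports its view back via `.sb set`.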
Standard options ---------------- **[-background or -bg, background, Background](options.htm#M-background)** **[-borderwidth or -bd, borderWidth, BorderWidth](options.htm#M-borderwidth)** **[-cursor, cursor, Cursor](options.htm#M-cursor)** **[-disabledforeground, disabledForeground, DisabledForeground](options.htm#M-disabledforeground)** **[-exportselection, exportSelection, ExportSelection](options.htm#M-exportselection)** **[-font, font, Font](options.htm#M-font)** **[-foreground or -fg, foreground, Foreground](options.htm#M-foreground)** **[-highlightbackground, highlightBackground, HighlightBackground](options.htm#M-highlightbackground)** **[-highlightcolor, highlightColor, HighlightColor](options.htm#M-highlightcolor)** **[-highlightthickness, highlightThickness, HighlightThickness](options.htm#M-highlightthickness)** **[-justify, justify, Justify](options.htm#M-justify)** **[-relief, relief, Relief](options.htm#M-relief)** **[-selectbackground, selectBackground, Foreground](options.htm#M-selectbackground)** **[-selectborderwidth, selectBorderWidth, BorderWidth](options.htm#M-selectborderwidth)** **[-selectforeground, selectForeground, Background](options.htm#M-selectforeground)** **[-setgrid, setGrid, SetGrid](options.htm#M-setgrid)** **[-takefocus, takeFocus, TakeFocus](options.htm#M-takefocus)** **[-xscrollcommand, xScrollCommand, ScrollCommand](options.htm#M-xscrollcommand)** **[-yscrollcommand, yScrollCommand, ScrollCommand](options.htm#M-yscrollcommand)** Widget-specific options ----------------------- Command-Line Name: **-activestyle** Database Name: **activeStyle** Database Class: **ActiveStyle** Specifies the style in which to draw the active element. This must be one of **dotbox** (show a focus ring around the active element), **none** (no special indication of active element) or **underline** (underline the active element). The default is **underline** on Windows, and **dotbox** elsewhere. 
Command-Line Name: **-height** Database Name: **height** Database Class: **Height** Specifies the desired height for the window, in lines. If zero or less, then the desired height for the window is made just large enough to hold all the elements in the listbox. Command-Line Name: **-listvariable** Database Name: **listVariable** Database Class: **[Variable](../tclcmd/variable.htm)** Specifies the name of a global variable. The value of the variable is a list to be displayed inside the widget; if the variable value changes then the widget will automatically update itself to reflect the new value. Attempts to assign a variable with an invalid list value to **-listvariable** will cause an error. Attempts to unset a variable in use as a **-listvariable** will fail but will not generate an error. Command-Line Name: **-selectmode** Database Name: **selectMode** Database Class: **SelectMode** Specifies one of several styles for manipulating the selection. The value of the option may be arbitrary, but the default bindings expect it to be either **single**, **browse**, **multiple**, or **extended**; the default value is **browse**. Command-Line Name: **-state** Database Name: **state** Database Class: **State** Specifies one of two states for the listbox: **normal** or **disabled**. If the listbox is disabled then items may not be inserted or deleted, items are drawn in the **-disabledforeground** color, and selection cannot be modified and is not shown (though selection information is retained). Command-Line Name: **-width** Database Name: **width** Database Class: **Width** Specifies the desired width for the window in characters. If the font does not have a uniform width then the width of the character “0” is used in translating from character units to screen units. If zero or less, then the desired width for the window is made just large enough to hold all the elements in the listbox. 
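The **-listvariable** option described above can be illustrated with a short sketch (not from the man page; names are illustrative):

```tcl
# The listbox mirrors the Tcl list held in the global variable, so
# modifying the variable updates the display directly.
set fruits {apple banana cherry}
listbox .lb -listvariable fruits
pack .lb
lappend fruits durian   ;# the listbox now shows four elements
```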
Description ----------- The **listbox** command creates a new window (given by the *pathName* argument) and makes it into a listbox widget. Additional options, described above, may be specified on the command line or in the option database to configure aspects of the listbox such as its colors, font, text, and relief. The **listbox** command returns its *pathName* argument. At the time this command is invoked, there must not exist a window named *pathName*, but *pathName*'s parent must exist. A listbox is a widget that displays a list of strings, one per line. When first created, a new listbox has no elements. Elements may be added or deleted using widget commands described below. In addition, one or more elements may be selected as described below. If a listbox is exporting its selection (see **-exportselection** option), then it will observe the standard X11 protocols for handling the selection. Listbox selections are available as type **[STRING](../tclcmd/string.htm)**; the value of the selection will be the text of the selected elements, with newlines separating the elements. It is not necessary for all the elements to be displayed in the listbox window at once; commands described below may be used to change the view in the window. Listboxes allow scrolling in both directions using the standard **-xscrollcommand** and **-yscrollcommand** options. They also support scanning, as described below. Indices ------- Many of the widget commands for listboxes take one or more indices as arguments. An index specifies a particular element of the listbox, in any of the following ways: *number* Specifies the element as a numerical index, where 0 corresponds to the first element in the listbox. **active** Indicates the element that has the location cursor. This element will be displayed as specified by **-activestyle** when the listbox has the keyboard focus, and it is specified with the **activate** widget command. 
**anchor** Indicates the anchor point for the selection, which is set with the **[selection anchor](selection.htm)** widget command. **end** Indicates the end of the listbox. For most commands this refers to the last element in the listbox, but for a few commands such as **index** and **insert** it refers to the element just after the last one. **@***x***,***y* Indicates the element that covers the point in the listbox window specified by *x* and *y* (in pixel coordinates). If no element covers that point, then the closest element to that point is used. In the widget command descriptions below, arguments named *index*, *first*, and *last* always contain text indices in one of the above forms. Widget command -------------- The **listbox** command creates a new Tcl command whose name is *pathName*. This command may be used to invoke various operations on the widget. It has the following general form: ``` *pathName option* ?*arg arg ...*? ``` *Option* and the *arg*s determine the exact behavior of the command. The following commands are possible for listbox widgets: *pathName* **activate** *index* Sets the active element to the one indicated by *index*. If *index* is outside the range of elements in the listbox then the closest element is activated. The active element is drawn as specified by **-activestyle** when the widget has the input focus, and its index may be retrieved with the index **active**. *pathName* **bbox** *index* Returns a list of four numbers describing the bounding box of the text in the element given by *index*. The first two elements of the list give the x and y coordinates of the upper-left corner of the screen area covered by the text (specified in pixels relative to the widget) and the last two elements give the width and height of the area, in pixels. 
If no part of the element given by *index* is visible on the screen, or if *index* refers to a non-existent element, then the result is an empty string; if the element is partially visible, the result gives the full area of the element, including any parts that are not visible. *pathName* **cget** *option* Returns the current value of the configuration option given by *option*. *Option* may have any of the values accepted by the **listbox** command. *pathName* **configure** ?*option*? ?*value option value ...*? Query or modify the configuration options of the widget. If no *option* is specified, returns a list describing all of the available options for *pathName* (see **[Tk\_ConfigureInfo](https://www.tcl.tk/man/tcl/TkLib/ConfigWidg.htm)** for information on the format of this list). If *option* is specified with no *value*, then the command returns a list describing the one named option (this list will be identical to the corresponding sublist of the value returned if no *option* is specified). If one or more *option-value* pairs are specified, then the command modifies the given widget option(s) to have the given value(s); in this case the command returns an empty string. *Option* may have any of the values accepted by the **listbox** command. *pathName* **curselection** Returns a list containing the numerical indices of all of the elements in the listbox that are currently selected. If there are no elements selected in the listbox then an empty string is returned. *pathName* **delete** *first* ?*last*? Deletes one or more elements of the listbox. *First* and *last* are indices specifying the first and last elements in the range to delete. If *last* is not specified it defaults to *first*, i.e. a single element is deleted. *pathName* **get** *first* ?*last*? If *last* is omitted, returns the contents of the listbox element indicated by *first*, or an empty string if *first* refers to a non-existent element. 
If *last* is specified, the command returns a list whose elements are all of the listbox elements between *first* and *last*, inclusive. Both *first* and *last* may have any of the standard forms for indices. *pathName* **index** *index* Returns the integer index value that corresponds to *index*. If *index* is **end** the return value is a count of the number of elements in the listbox (not the index of the last element). *pathName* **insert** *index* ?*element element ...*? Inserts zero or more new elements in the list just before the element given by *index*. If *index* is specified as **end** then the new elements are added to the end of the list. Returns an empty string. *pathName* **itemcget** *index option* Returns the current value of the item configuration option given by *option*. *Option* may have any of the values accepted by the **itemconfigure** command. *pathName* **itemconfigure** *index* ?*option*? ?*value*? ?*option value ...*? Query or modify the configuration options of an item in the listbox. If no *option* is specified, returns a list describing all of the available options for the item (see **[Tk\_ConfigureInfo](https://www.tcl.tk/man/tcl/TkLib/ConfigWidg.htm)** for information on the format of this list). If *option* is specified with no *value*, then the command returns a list describing the one named option (this list will be identical to the corresponding sublist of the value returned if no *option* is specified). If one or more *option-value* pairs are specified, then the command modifies the given widget option(s) to have the given value(s); in this case the command returns an empty string. The following options are currently supported for items: **-background** *color* *Color* specifies the background color to use when displaying the item. It may have any of the forms accepted by **[Tk\_GetColor](https://www.tcl.tk/man/tcl/TkLib/GetColor.htm)**. 
**-foreground** *color* *Color* specifies the foreground color to use when displaying the item. It may have any of the forms accepted by **[Tk\_GetColor](https://www.tcl.tk/man/tcl/TkLib/GetColor.htm)**. **-selectbackground** *color* *Color* specifies the background color to use when displaying the item while it is selected. It may have any of the forms accepted by **[Tk\_GetColor](https://www.tcl.tk/man/tcl/TkLib/GetColor.htm)**. **-selectforeground** *color* *Color* specifies the foreground color to use when displaying the item while it is selected. It may have any of the forms accepted by **[Tk\_GetColor](https://www.tcl.tk/man/tcl/TkLib/GetColor.htm)**. *pathName* **nearest** *y* Given a y-coordinate within the listbox window, this command returns the index of the (visible) listbox element nearest to that y-coordinate. *pathName* **scan** *option args* This command is used to implement scanning on listboxes. It has two forms, depending on *option*: *pathName* **scan mark** *x y* Records *x* and *y* and the current view in the listbox window; used in conjunction with later **scan dragto** commands. Typically this command is associated with a mouse button press in the widget. It returns an empty string. *pathName* **scan dragto** *x y* This command computes the difference between its *x* and *y* arguments and the *x* and *y* arguments to the last **scan mark** command for the widget. It then adjusts the view by 10 times the difference in coordinates. This command is typically associated with mouse motion events in the widget, to produce the effect of dragging the list at high speed through the window. The return value is an empty string. *pathName* **see** *index* Adjust the view in the listbox so that the element given by *index* is visible. 
If the element is already visible then the command has no effect; if the element is near one edge of the window then the listbox scrolls to bring the element into view at the edge; otherwise the listbox scrolls to center the element. *pathName* **selection** *option arg* This command is used to adjust the selection within a listbox. It has several forms, depending on *option*: *pathName* **selection anchor** *index* Sets the selection anchor to the element given by *index*. If *index* refers to a non-existent element, then the closest element is used. The selection anchor is the end of the selection that is fixed while dragging out a selection with the mouse. The index **anchor** may be used to refer to the anchor element. *pathName* **selection clear** *first* ?*last*? If any of the elements between *first* and *last* (inclusive) are selected, they are deselected. The selection state is not changed for elements outside this range. *pathName* **selection includes** *index* Returns 1 if the element indicated by *index* is currently selected, 0 if it is not. *pathName* **selection set** *first* ?*last*? Selects all of the elements in the range between *first* and *last*, inclusive, without affecting the selection state of elements outside that range. *pathName* **size** Returns a decimal string indicating the total number of elements in the listbox. *pathName* **xview** ?*args*? This command is used to query and change the horizontal position of the information in the widget's window. It can take any of the following forms: *pathName* **xview** Returns a list containing two elements. Each element is a real fraction between 0 and 1; together they describe the horizontal span that is visible in the window. For example, if the first element is .2 and the second element is .6, 20% of the listbox's text is off-screen to the left, the middle 40% is visible in the window, and 40% of the text is off-screen to the right. 
These are the same values passed to scrollbars via the **-xscrollcommand** option. *pathName* **xview** *index* Adjusts the view in the window so that the character position given by *index* is displayed at the left edge of the window. Character positions are defined by the width of the character **0**. *pathName* **xview moveto** *fraction* Adjusts the view in the window so that *fraction* of the total width of the listbox text is off-screen to the left. *fraction* must be a fraction between 0 and 1. *pathName* **xview scroll** *number what* This command shifts the view in the window left or right according to *number* and *what*. *Number* must be an integer. *What* must be either **units** or **pages** or an abbreviation of one of these. If *what* is **units**, the view adjusts left or right by *number* character units (the width of the **0** character) on the display; if it is **pages** then the view adjusts by *number* screenfuls. If *number* is negative then characters farther to the left become visible; if it is positive then characters farther to the right become visible. *pathName* **yview** ?*args*? This command is used to query and change the vertical position of the text in the widget's window. It can take any of the following forms: *pathName* **yview** Returns a list containing two elements, both of which are real fractions between 0 and 1. The first element gives the position of the listbox element at the top of the window, relative to the listbox as a whole (0.5 means it is halfway through the listbox, for example). The second element gives the position of the listbox element just after the last one in the window, relative to the listbox as a whole. These are the same values passed to scrollbars via the **-yscrollcommand** option. *pathName* **yview** *index* Adjusts the view in the window so that the element given by *index* is displayed at the top of the window. 
*pathName* **yview moveto** *fraction* Adjusts the view in the window so that the element given by *fraction* appears at the top of the window. *Fraction* is a fraction between 0 and 1; 0 indicates the first element in the listbox, 0.33 indicates the element one-third the way through the listbox, and so on. *pathName* **yview scroll** *number what* This command adjusts the view in the window up or down according to *number* and *what*. *Number* must be an integer. *What* must be either **units** or **pages**. If *what* is **units**, the view adjusts up or down by *number* lines; if it is **pages** then the view adjusts by *number* screenfuls. If *number* is negative then earlier elements become visible; if it is positive then later elements become visible. Default bindings ---------------- Tk automatically creates class bindings for listboxes that give them Motif-like behavior. Much of the behavior of a listbox is determined by its **-selectmode** option, which selects one of four ways of dealing with the selection. If the selection mode is **single** or **browse**, at most one element can be selected in the listbox at once. In both modes, clicking button 1 on an element selects it and deselects any other selected item. In **browse** mode it is also possible to drag the selection with button 1. On button 1, the listbox will also take focus if it has a **normal** state. If the selection mode is **multiple** or **extended**, any number of elements may be selected at once, including discontiguous ranges. In **multiple** mode, clicking button 1 on an element toggles its selection state without affecting any other elements. In **extended** mode, pressing button 1 on an element selects it, deselects everything else, and sets the anchor to the element under the mouse; dragging the mouse with button 1 down extends the selection to include all the elements between the anchor and the element under the mouse, inclusive. 
Most people will probably want to use **browse** mode for single selections and **extended** mode for multiple selections; the other modes appear to be useful only in special situations. Any time the set of selected item(s) in the listbox is updated by the user through the keyboard or mouse, the virtual event **<<ListboxSelect>>** will be generated. This virtual event will not be generated when the selection is adjusted with the *pathName* **[selection](selection.htm)** command. Binding to this event is the easiest way to be notified of any user changes to the listbox selection. In addition to the above behavior, the following additional behavior is defined by the default bindings:

1. In **extended** mode, the selected range can be adjusted by pressing button 1 with the Shift key down: this modifies the selection to consist of the elements between the anchor and the element under the mouse, inclusive. The un-anchored end of this new selection can also be dragged with the button down.
2. In **extended** mode, pressing button 1 with the Control key down starts a toggle operation: the anchor is set to the element under the mouse, and its selection state is reversed. The selection state of other elements is not changed. If the mouse is dragged with button 1 down, then the selection state of all elements between the anchor and the element under the mouse is set to match that of the anchor element; the selection state of all other elements remains what it was before the toggle operation began.
3. If the mouse leaves the listbox window with button 1 down, the window scrolls away from the mouse, making information visible that used to be off-screen on the side of the mouse. The scrolling continues until the mouse re-enters the window, the button is released, or the end of the listbox is reached.
4. Mouse button 2 may be used for scanning. If it is pressed and dragged over the listbox, the contents of the listbox drag at high speed in the direction the mouse moves.
5. If the Up or Down key is pressed, the location cursor (active element) moves up or down one element. If the selection mode is **browse** or **extended** then the new active element is also selected and all other elements are deselected. In **extended** mode the new active element becomes the selection anchor.
6. In **extended** mode, Shift-Up and Shift-Down move the location cursor (active element) up or down one element and also extend the selection to that element in a fashion similar to dragging with mouse button 1.
7. The Left and Right keys scroll the listbox view left and right by the width of the character **0**. Control-Left and Control-Right scroll the listbox view left and right by the width of the window. Control-Prior and Control-Next also scroll left and right by the width of the window.
8. The Prior and Next keys scroll the listbox view up and down by one page (the height of the window).
9. The Home and End keys scroll the listbox horizontally to the left and right edges, respectively.
10. Control-Home sets the location cursor to the first element in the listbox, selects that element, and deselects everything else in the listbox.
11. Control-End sets the location cursor to the last element in the listbox, selects that element, and deselects everything else in the listbox.
12. In **extended** mode, Control-Shift-Home extends the selection to the first element in the listbox and Control-Shift-End extends the selection to the last element.
13. In **multiple** mode, Control-Shift-Home moves the location cursor to the first element in the listbox and Control-Shift-End moves the location cursor to the last element.
14. The space and Select keys make a selection at the location cursor (active element) just as if mouse button 1 had been pressed over this element.
15. In **extended** mode, Control-Shift-space and Shift-Select extend the selection to the active element just as if button 1 had been pressed with the Shift key down.
16. In **extended** mode, the Escape key cancels the most recent selection and restores all the elements in the selected range to their previous selection state.
17. Control-slash selects everything in the widget, except in **single** and **browse** modes, in which case it selects the active element and deselects everything else.
18. Control-backslash deselects everything in the widget, except in **browse** mode where it has no effect.
19. The F16 key (labelled Copy on many Sun workstations) or Meta-w copies the selection in the widget to the clipboard, if there is a selection.

The behavior of listboxes can be changed by defining new bindings for individual widgets or by redefining the class bindings.

See also
--------

**[ttk::treeview](ttk_treeview.htm)**

Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/listbox.htm>
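The **<<ListboxSelect>>** virtual event described for the listbox above can be used to react to user selection changes. A minimal sketch (the widget name and callback are illustrative):

```tcl
package require Tk

# A listbox allowing multiple discontiguous selections
listbox .lb -selectmode extended
.lb insert end alpha beta gamma
pack .lb -fill both -expand yes

# <<ListboxSelect>> fires when the user changes the selection with
# the keyboard or mouse, but not for programmatic changes made via
# the widget's "selection" command
bind .lb <<ListboxSelect>> {
    puts "selected indices: [.lb curselection]"
}
```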
tcl_tk ttk_label ttk\_label ========== [NAME](ttk_label.htm#M2) ttk::label — Display a text string and/or image [SYNOPSIS](ttk_label.htm#M3) [DESCRIPTION](ttk_label.htm#M4) [STANDARD OPTIONS](ttk_label.htm#M5) [-class, undefined, undefined](ttk_widget.htm#M-class) [-compound, compound, Compound](ttk_widget.htm#M-compound) [-cursor, cursor, Cursor](ttk_widget.htm#M-cursor) [-image, image, Image](ttk_widget.htm#M-image) [-style, style, Style](ttk_widget.htm#M-style) [-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus) [-text, text, Text](ttk_widget.htm#M-text) [-textvariable, textVariable, Variable](ttk_widget.htm#M-textvariable) [-underline, underline, Underline](ttk_widget.htm#M-underline) [-width, width, Width](ttk_widget.htm#M-width) [WIDGET-SPECIFIC OPTIONS](ttk_label.htm#M6) [-anchor, anchor, Anchor](ttk_label.htm#M7) [-background, frameColor, FrameColor](ttk_label.htm#M8) [-font, font, Font](ttk_label.htm#M9) [-foreground, textColor, TextColor](ttk_label.htm#M10) [-justify, justify, Justify](ttk_label.htm#M11) [-padding, padding, Padding](ttk_label.htm#M12) [-relief, relief, Relief](ttk_label.htm#M13) [-text, text, Text](ttk_label.htm#M14) [-wraplength, wrapLength, WrapLength](ttk_label.htm#M15) [WIDGET COMMAND](ttk_label.htm#M16) [SEE ALSO](ttk_label.htm#M17) Name ---- ttk::label — Display a text string and/or image Synopsis -------- **ttk::label** *pathName* ?*options*? Description ----------- A **ttk::label** widget displays a textual label and/or image. The label may be linked to a Tcl variable to automatically change the displayed text. 
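The variable link works through the **-textvariable** option; a minimal sketch (the variable and widget names are illustrative):

```tcl
package require Tk

# Create a label whose text tracks the global variable "status"
set status "Ready"
ttk::label .status -textvariable status -anchor w
pack .status -fill x

# Updating the variable updates the displayed text automatically
set status "Loading..."
```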
Standard options ---------------- **[-class, undefined, undefined](ttk_widget.htm#M-class)** **[-compound, compound, Compound](ttk_widget.htm#M-compound)** **[-cursor, cursor, Cursor](ttk_widget.htm#M-cursor)** **[-image, image, Image](ttk_widget.htm#M-image)** **[-style, style, Style](ttk_widget.htm#M-style)** **[-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus)** **[-text, text, Text](ttk_widget.htm#M-text)** **[-textvariable, textVariable, Variable](ttk_widget.htm#M-textvariable)** **[-underline, underline, Underline](ttk_widget.htm#M-underline)** **[-width, width, Width](ttk_widget.htm#M-width)** Widget-specific options ----------------------- Command-Line Name: **-anchor** Database Name: **anchor** Database Class: **Anchor** Specifies how the information in the widget is positioned relative to the inner margins. Legal values are **n**, **ne**, **e**, **se**, **s**, **sw**, **w**, **nw**, and **center**. See also **-justify**. Command-Line Name: **-background** Database Name: **frameColor** Database Class: **FrameColor** The widget's background color. If unspecified, the theme default is used. Command-Line Name: **-font** Database Name: **font** Database Class: **Font** Font to use for label text. Command-Line Name: **-foreground** Database Name: **textColor** Database Class: **TextColor** The widget's foreground color. If unspecified, the theme default is used. Command-Line Name: **-justify** Database Name: **justify** Database Class: **Justify** If there are multiple lines of text, specifies how the lines are laid out relative to one another. One of **left**, **center**, or **right**. See also **-anchor**. Command-Line Name: **-padding** Database Name: **padding** Database Class: **Padding** Specifies the amount of extra space to allocate for the widget. The padding is a list of up to four length specifications *left top right bottom*. 
If fewer than four elements are specified, *bottom* defaults to *top*, *right* defaults to *left*, and *top* defaults to *left*. Command-Line Name: **-relief** Database Name: **relief** Database Class: **Relief** Specifies the 3-D effect desired for the widget border. Valid values are **flat**, **groove**, **raised**, **ridge**, **solid**, and **sunken**. Command-Line Name: **-text** Database Name: **text** Database Class: **Text** Specifies a text string to be displayed inside the widget (unless overridden by **-textvariable**). Command-Line Name: **-wraplength** Database Name: **wrapLength** Database Class: **WrapLength** Specifies the maximum line length (in pixels). If this option is less than or equal to zero, then automatic wrapping is not performed; otherwise the text is split into lines such that no line is longer than the specified value. Widget command -------------- Supports the standard widget commands **configure**, **cget**, **identify**, **instate**, and **state**; see *ttk::widget(n)*. See also -------- **[ttk::widget](ttk_widget.htm)**, **[label](label.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/ttk_label.htm> tcl_tk ttk_image ttk\_image ========== [NAME](ttk_image.htm#M2) ttk\_image — Define an element based on an image [SYNOPSIS](ttk_image.htm#M3) [DESCRIPTION](ttk_image.htm#M4) [OPTIONS](ttk_image.htm#M5) [**-border** *padding*](ttk_image.htm#M6) [**-height** *height*](ttk_image.htm#M7) [**-padding** *padding*](ttk_image.htm#M8) [**-sticky** *spec*](ttk_image.htm#M9) [**-width** *width*](ttk_image.htm#M10) [IMAGE STRETCHING](ttk_image.htm#M11) [EXAMPLE](ttk_image.htm#M12) [SEE ALSO](ttk_image.htm#M13) [KEYWORDS](ttk_image.htm#M14) Name ---- ttk\_image — Define an element based on an image Synopsis -------- **ttk::style element create** *name* **[image](image.htm)** *imageSpec* ?*options*? 
Description ----------- The *image* element factory creates a new element in the current theme whose visual appearance is determined by Tk images. *imageSpec* is a list of one or more elements. The first element is the default image name. The rest of the list is a sequence of *statespec / value* pairs specifying other images to use when the element is in a particular state or combination of states. Options ------- Valid *options* are: **-border** *padding* *padding* is a list of up to four integers, specifying the left, top, right, and bottom borders, respectively. See **[IMAGE STRETCHING](#M11)**, below. **-height** *height* Specifies a minimum height for the element. If less than zero, the base image's height is used as a default. **-padding** *padding* Specifies the element's interior padding. Defaults to **-border** if not specified. **-sticky** *spec* Specifies how the image is placed within the final parcel. *spec* contains zero or more characters “n”, “s”, “w”, or “e”. **-width** *width* Specifies a minimum width for the element. If less than zero, the base image's width is used as a default. Image stretching ---------------- If the element's allocated parcel is larger than the image, the image will be placed in the parcel based on the **-sticky** option. If the image needs to stretch horizontally (i.e., **-sticky ew**) or vertically (**-sticky ns**), subregions of the image are replicated to fill the parcel based on the **-border** option. The **-border** divides the image into 9 regions: four fixed corners, top and bottom edges (which may be tiled horizontally), left and right edges (which may be tiled vertically), and the central area (which may be tiled in both directions). 
Example
-------

```
set img1 [image create photo -file button.png]
set img2 [image create photo -file button-pressed.png]
set img3 [image create photo -file button-active.png]
ttk::style element create Button.button image \
    [list $img1 pressed $img2 active $img3] \
    -border {2 4} -sticky we
```

See also
--------

**[ttk::intro](ttk_intro.htm)**, **[ttk::style](ttk_style.htm)**, **[ttk\_vsapi](ttk_vsapi.htm)**, **[image](image.htm)**, **[photo](photo.htm)**

Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/ttk_image.htm> tcl_tk focus focus ===== [NAME](focus.htm#M2) focus — Manage the input focus [SYNOPSIS](focus.htm#M3) [DESCRIPTION](focus.htm#M4) [**focus**](focus.htm#M5) [**focus** *window*](focus.htm#M6) [**focus -displayof** *window*](focus.htm#M7) [**focus -force** *window*](focus.htm#M8) [**focus -lastfor** *window*](focus.htm#M9) [QUIRKS](focus.htm#M10) [EXAMPLE](focus.htm#M11) [KEYWORDS](focus.htm#M12) Name ---- focus — Manage the input focus Synopsis -------- **focus** **focus** *window* **focus** *option* ?*arg arg ...*? Description ----------- The **focus** command is used to manage the Tk input focus. At any given time, one window on each display is designated as the *focus window*; any key press or key release events for the display are sent to that window. It is normally up to the window manager to redirect the focus among the top-level windows of a display. For example, some window managers automatically set the input focus to a top-level window whenever the mouse enters it; others redirect the input focus only when the user clicks on a window. Usually the window manager will set the focus only to top-level windows, leaving it up to the application to redirect the focus among the children of the top-level. 
Tk remembers one focus window for each top-level (the most recent descendant of that top-level to receive the focus); when the window manager gives the focus to a top-level, Tk automatically redirects it to the remembered window. Within a top-level Tk uses an *explicit* focus model by default. Moving the mouse within a top-level does not normally change the focus; the focus changes only when a widget decides explicitly to claim the focus (e.g., because of a button click), or when the user types a key such as Tab that moves the focus. The Tcl procedure **tk\_focusFollowsMouse** may be invoked to create an *implicit* focus model: it reconfigures Tk so that the focus is set to a window whenever the mouse enters it. The Tcl procedures **tk\_focusNext** and **tk\_focusPrev** implement a focus order among the windows of a top-level; they are used in the default bindings for Tab and Shift-Tab, among other things. The **focus** command can take any of the following forms: **focus** Returns the path name of the focus window on the display containing the application's main window, or an empty string if no window in this application has the focus on that display. Note: it is better to specify the display explicitly using **-displayof** (see below) so that the code will work in applications using multiple displays. **focus** *window* If the application currently has the input focus on *window*'s display, this command resets the input focus for *window*'s display to *window* and returns an empty string. If the application does not currently have the input focus on *window*'s display, *window* will be remembered as the focus for its top-level; the next time the focus arrives at the top-level, Tk will redirect it to *window*. If *window* is an empty string then the command does nothing. **focus -displayof** *window* Returns the name of the focus window on the display containing *window*. 
If the focus window for *window*'s display is not in this application, the return value is an empty string. **focus -force** *window* Sets the focus of *window*'s display to *window*, even if the application does not currently have the input focus for the display. This command should be used sparingly, if at all. In normal usage, an application should not claim the focus for itself; instead, it should wait for the window manager to give it the focus. If *window* is an empty string then the command does nothing. **focus -lastfor** *window* Returns the name of the most recent window to have the input focus among all the windows in the same top-level as *window*. If no window in that top-level has ever had the input focus, or if the most recent focus window has been deleted, then the name of the top-level is returned. The return value is the window that will receive the input focus the next time the window manager gives the focus to the top-level. Quirks ------ When an internal window receives the input focus, Tk does not actually set the X focus to that window; as far as X is concerned, the focus will stay on the top-level window containing the window with the focus. However, Tk generates FocusIn and FocusOut events just as if the X focus were on the internal window. This approach gets around a number of problems that would occur if the X focus were actually moved; the fact that the X focus is on the top-level is invisible unless you use C code to query the X server directly. 
Example
-------

To make a window that only participates in the focus traversal ring when a variable is set, add the following bindings to the widgets *before* and *after* it in that focus ring:

```
button .before -text "Before"
button .middle -text "Middle"
button .after -text "After"
checkbutton .flag -variable traverseToMiddle -takefocus 0
pack .flag -side left
pack .before .middle .after
bind .before <Tab> {
    if {!$traverseToMiddle} {
        focus .after
        break
    }
}
bind .after <Shift-Tab> {
    if {!$traverseToMiddle} {
        focus .before
        break
    }
}
focus .before
```

Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/focus.htm> tcl_tk button button ====== [NAME](button.htm#M2) button — Create and manipulate 'button' action widgets [SYNOPSIS](button.htm#M3) [STANDARD OPTIONS](button.htm#M4) [-activebackground, activeBackground, Foreground](options.htm#M-activebackground) [-activeforeground, activeForeground, Background](options.htm#M-activeforeground) [-anchor, anchor, Anchor](options.htm#M-anchor) [-background or -bg, background, Background](options.htm#M-background) [-bitmap, bitmap, Bitmap](options.htm#M-bitmap) [-borderwidth or -bd, borderWidth, BorderWidth](options.htm#M-borderwidth) [-compound, compound, Compound](options.htm#M-compound) [-cursor, cursor, Cursor](options.htm#M-cursor) [-disabledforeground, disabledForeground, DisabledForeground](options.htm#M-disabledforeground) [-font, font, Font](options.htm#M-font) [-foreground or -fg, foreground, Foreground](options.htm#M-foreground) [-highlightbackground, highlightBackground, HighlightBackground](options.htm#M-highlightbackground) [-highlightcolor, highlightColor, HighlightColor](options.htm#M-highlightcolor) [-highlightthickness, highlightThickness, HighlightThickness](options.htm#M-highlightthickness) [-image, image, Image](options.htm#M-image) [-justify, justify, Justify](options.htm#M-justify) [-padx, padX, Pad](options.htm#M-padx) [-pady, padY, 
Pad](options.htm#M-pady) [-relief, relief, Relief](options.htm#M-relief) [-repeatdelay, repeatDelay, RepeatDelay](options.htm#M-repeatdelay) [-repeatinterval, repeatInterval, RepeatInterval](options.htm#M-repeatinterval) [-takefocus, takeFocus, TakeFocus](options.htm#M-takefocus) [-text, text, Text](options.htm#M-text) [-textvariable, textVariable, Variable](options.htm#M-textvariable) [-underline, underline, Underline](options.htm#M-underline) [-wraplength, wrapLength, WrapLength](options.htm#M-wraplength) [WIDGET-SPECIFIC OPTIONS](button.htm#M5) [-command, command, Command](button.htm#M6) [-default, default, Default](button.htm#M7) [-height, height, Height](button.htm#M8) [-overrelief, overRelief, OverRelief](button.htm#M9) [-state, state, State](button.htm#M10) [-width, width, Width](button.htm#M11) [DESCRIPTION](button.htm#M12) [WIDGET COMMAND](button.htm#M13) [*pathName* **cget** *option*](button.htm#M14) [*pathName* **configure** ?*option*? ?*value option value ...*?](button.htm#M15) [*pathName* **flash**](button.htm#M16) [*pathName* **invoke**](button.htm#M17) [DEFAULT BINDINGS](button.htm#M18) [PLATFORM NOTES](button.htm#M19) [EXAMPLES](button.htm#M20) [SEE ALSO](button.htm#M21) [KEYWORDS](button.htm#M22) Name ---- button — Create and manipulate 'button' action widgets Synopsis -------- **button** *pathName* ?*options*? 
Standard options ---------------- **[-activebackground, activeBackground, Foreground](options.htm#M-activebackground)** **[-activeforeground, activeForeground, Background](options.htm#M-activeforeground)** **[-anchor, anchor, Anchor](options.htm#M-anchor)** **[-background or -bg, background, Background](options.htm#M-background)** **[-bitmap, bitmap, Bitmap](options.htm#M-bitmap)** **[-borderwidth or -bd, borderWidth, BorderWidth](options.htm#M-borderwidth)** **[-compound, compound, Compound](options.htm#M-compound)** **[-cursor, cursor, Cursor](options.htm#M-cursor)** **[-disabledforeground, disabledForeground, DisabledForeground](options.htm#M-disabledforeground)** **[-font, font, Font](options.htm#M-font)** **[-foreground or -fg, foreground, Foreground](options.htm#M-foreground)** **[-highlightbackground, highlightBackground, HighlightBackground](options.htm#M-highlightbackground)** **[-highlightcolor, highlightColor, HighlightColor](options.htm#M-highlightcolor)** **[-highlightthickness, highlightThickness, HighlightThickness](options.htm#M-highlightthickness)** **[-image, image, Image](options.htm#M-image)** **[-justify, justify, Justify](options.htm#M-justify)** **[-padx, padX, Pad](options.htm#M-padx)** **[-pady, padY, Pad](options.htm#M-pady)** **[-relief, relief, Relief](options.htm#M-relief)** **[-repeatdelay, repeatDelay, RepeatDelay](options.htm#M-repeatdelay)** **[-repeatinterval, repeatInterval, RepeatInterval](options.htm#M-repeatinterval)** **[-takefocus, takeFocus, TakeFocus](options.htm#M-takefocus)** **[-text, text, Text](options.htm#M-text)** **[-textvariable, textVariable, Variable](options.htm#M-textvariable)** **[-underline, underline, Underline](options.htm#M-underline)** **[-wraplength, wrapLength, WrapLength](options.htm#M-wraplength)** Widget-specific options ----------------------- Command-Line Name: **-command** Database Name: **command** Database Class: **Command** Specifies a Tcl command to associate with the button. 
This command is typically invoked when mouse button 1 is released over the button window. Command-Line Name: **-default** Database Name: **default** Database Class: **Default** Specifies one of three states for the default ring: **normal**, **active**, or **disabled**. In active state, the button is drawn with the platform specific appearance for a default button. In normal state, the button is drawn with the platform specific appearance for a non-default button, leaving enough space to draw the default button appearance. The normal and active states will result in buttons of the same size. In disabled state, the button is drawn with the non-default button appearance without leaving space for the default appearance. The disabled state may result in a smaller button than the active state. Command-Line Name: **-height** Database Name: **height** Database Class: **Height** Specifies a desired height for the button. If an image or bitmap is being displayed in the button then the value is in screen units (i.e. any of the forms acceptable to **[Tk\_GetPixels](https://www.tcl.tk/man/tcl/TkLib/GetPixels.htm)**); for text it is in lines of text. If this option is not specified, the button's desired height is computed from the size of the image or bitmap or text being displayed in it. Command-Line Name: **-overrelief** Database Name: **overRelief** Database Class: **OverRelief** Specifies an alternative relief for the button, to be used when the mouse cursor is over the widget. This option can be used to make toolbar buttons, by configuring **-relief flat -overrelief raised**. If the value of this option is the empty string, then no alternative relief is used when the mouse cursor is over the button. The empty string is the default value. Command-Line Name: **-state** Database Name: **state** Database Class: **State** Specifies one of three states for the button: **normal**, **active**, or **disabled**. 
In normal state the button is displayed using the **-foreground** and **-background** options. The active state is typically used when the pointer is over the button. In active state the button is displayed using the **-activeforeground** and **-activebackground** options. Disabled state means that the button should be insensitive: the default bindings will refuse to activate the widget and will ignore mouse button presses. In this state the **-disabledforeground** and **-background** options determine how the button is displayed. Command-Line Name: **-width** Database Name: **width** Database Class: **Width** Specifies a desired width for the button. If an image or bitmap is being displayed in the button then the value is in screen units (i.e. any of the forms acceptable to **[Tk\_GetPixels](https://www.tcl.tk/man/tcl/TkLib/GetPixels.htm)**). For a text button (no image or with **-compound none**) then the width specifies how much space in characters to allocate for the text label. If the width is negative then this specifies a minimum width. If this option is not specified, the button's desired width is computed from the size of the image or bitmap or text being displayed in it. Description ----------- The **button** command creates a new window (given by the *pathName* argument) and makes it into a button widget. Additional options, described above, may be specified on the command line or in the option database to configure aspects of the button such as its colors, font, text, and initial relief. The **button** command returns its *pathName* argument. At the time this command is invoked, there must not exist a window named *pathName*, but *pathName*'s parent must exist. A button is a widget that displays a textual string, bitmap or image. 
If text is displayed, it must all be in a single font, but it can occupy multiple lines on the screen (if it contains newlines or if wrapping occurs because of the **-wraplength** option) and one of the characters may optionally be underlined using the **-underline** option. It can display itself in any of three different ways, according to the **-state** option; it can be made to appear raised, sunken, or flat; and it can be made to flash. When a user invokes the button (by pressing mouse button 1 with the cursor over the button), the Tcl command specified in the **-command** option is invoked. Widget command -------------- The **button** command creates a new Tcl command whose name is *pathName*. This command may be used to invoke various operations on the widget. It has the following general form: ``` *pathName option* ?*arg arg ...*? ``` *Option* and the *arg*s determine the exact behavior of the command. The following commands are possible for button widgets: *pathName* **cget** *option* Returns the current value of the configuration option given by *option*. *Option* may have any of the values accepted by the **button** command. *pathName* **configure** ?*option*? ?*value option value ...*? Query or modify the configuration options of the widget. If no *option* is specified, returns a list describing all of the available options for *pathName* (see **[Tk\_ConfigureInfo](https://www.tcl.tk/man/tcl/TkLib/ConfigWidg.htm)** for information on the format of this list). If *option* is specified with no *value*, then the command returns a list describing the one named option (this list will be identical to the corresponding sublist of the value returned if no *option* is specified). If one or more *option-value* pairs are specified, then the command modifies the given widget option(s) to have the given value(s); in this case the command returns an empty string. *Option* may have any of the values accepted by the **button** command. 
*pathName* **flash** Flash the button. This is accomplished by redisplaying the button several times, alternating between the configured activebackground and background colors. At the end of the flash the button is left in the same normal/active state as when the command was invoked. This command is ignored if the button's state is **disabled**. *pathName* **invoke** Invoke the Tcl command associated with the button, if there is one. The return value is the return value from the Tcl command, or an empty string if there is no command associated with the button. This command is ignored if the button's state is **disabled**.

Default bindings
----------------

Tk automatically creates class bindings for buttons that give them default behavior:

1. A button activates whenever the mouse passes over it and deactivates whenever the mouse leaves the button. Under Windows, this binding is only active when mouse button 1 has been pressed over the button.
2. A button's relief is changed to sunken whenever mouse button 1 is pressed over the button, and the relief is restored to its original value when button 1 is later released.
3. If mouse button 1 is pressed over a button and later released over the button, the button is invoked. However, if the mouse is not over the button when button 1 is released, then no invocation occurs.
4. When a button has the input focus, the space key causes the button to be invoked.

If the button's state is **disabled** then none of the above actions occur: the button is completely non-responsive. The behavior of buttons can be changed by defining new bindings for individual widgets or by redefining the class bindings.

Platform notes
--------------

On Aqua/Mac OS X, some configuration options are ignored for the purpose of drawing the widget because they would otherwise conflict with platform guidelines. The **configure** and **cget** subcommands can still manipulate the values, but do not cause any variation to the look of the widget. 
The options affected notably include **-background** and **-relief**.

Examples
--------

This is the classic Tk “Hello, World!” demonstration:

```
button .b -text "Hello, World!" -command exit
pack .b
```

This example demonstrates how to handle button accelerators:

```
button .b1 -text Hello -underline 0
button .b2 -text World -underline 0
bind . <Key-h> {.b1 flash; .b1 invoke}
bind . <Key-w> {.b2 flash; .b2 invoke}
pack .b1 .b2
```

See also
--------

**[ttk::button](ttk_button.htm)**

Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/button.htm>
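The toolbar pattern mentioned under **-overrelief** can be sketched as follows (the widget name and command body are illustrative):

```tcl
package require Tk

# Toolbar-style button: flat normally, raised while the pointer hovers
button .open -text "Open" -relief flat -overrelief raised \
    -command {puts "open requested"}
pack .open -side left -padx 2 -pady 2
```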
tcl_tk ttk_progressbar ttk\_progressbar ================ [NAME](ttk_progressbar.htm#M2) ttk::progressbar — Provide progress feedback [SYNOPSIS](ttk_progressbar.htm#M3) [DESCRIPTION](ttk_progressbar.htm#M4) [STANDARD OPTIONS](ttk_progressbar.htm#M5) [-class, undefined, undefined](ttk_widget.htm#M-class) [-cursor, cursor, Cursor](ttk_widget.htm#M-cursor) [-style, style, Style](ttk_widget.htm#M-style) [-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus) [WIDGET-SPECIFIC OPTIONS](ttk_progressbar.htm#M6) [-orient, orient, Orient](ttk_progressbar.htm#M7) [-length, length, Length](ttk_progressbar.htm#M8) [-mode, mode, Mode](ttk_progressbar.htm#M9) [-maximum, maximum, Maximum](ttk_progressbar.htm#M10) [-value, value, Value](ttk_progressbar.htm#M11) [-variable, variable, Variable](ttk_progressbar.htm#M12) [-phase, phase, Phase](ttk_progressbar.htm#M13) [WIDGET COMMAND](ttk_progressbar.htm#M14) [*pathName* **cget** *option*](ttk_progressbar.htm#M15) [*pathName* **configure** ?*option*? ?*value option value ...*?](ttk_progressbar.htm#M16) [*pathName* **identify** *x y*](ttk_progressbar.htm#M17) [*pathName* **instate** *statespec* ?*script*?](ttk_progressbar.htm#M18) [*pathName* **start** ?*interval*?](ttk_progressbar.htm#M19) [*pathName* **state** ?*stateSpec*?](ttk_progressbar.htm#M20) [*pathName* **step** ?*amount*?](ttk_progressbar.htm#M21) [*pathName* **stop**](ttk_progressbar.htm#M22) [SEE ALSO](ttk_progressbar.htm#M23) Name ---- ttk::progressbar — Provide progress feedback Synopsis -------- **ttk::progressbar** *pathName* ?*options*? Description ----------- A **ttk::progressbar** widget shows the status of a long-running operation. They can operate in two modes: *determinate* mode shows the amount completed relative to the total amount of work to be done, and *indeterminate* mode provides an animated display to let the user know that something is happening. 
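Both modes can be sketched briefly (a minimal example; widget names and timings are illustrative):

```tcl
package require Tk

# Determinate mode: -value measures completed work against -maximum
ttk::progressbar .done -orient horizontal -length 200 \
    -mode determinate -maximum 100 -value 0
pack .done -padx 10 -pady 5
.done step 25            ;# -value is now 25

# Indeterminate mode: animate until stopped
ttk::progressbar .busy -orient horizontal -length 200 -mode indeterminate
pack .busy -padx 10 -pady 5
.busy start 10           ;# invoke "step" every 10 milliseconds
after 3000 {.busy stop}  ;# cancel the recurring timer after 3 seconds
```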
Standard options ---------------- **[-class, undefined, undefined](ttk_widget.htm#M-class)** **[-cursor, cursor, Cursor](ttk_widget.htm#M-cursor)** **[-style, style, Style](ttk_widget.htm#M-style)** **[-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus)** Widget-specific options ----------------------- Command-Line Name: **-orient** Database Name: **orient** Database Class: **Orient** One of **horizontal** or **vertical**. Specifies the orientation of the progress bar. Command-Line Name: **-length** Database Name: **length** Database Class: **Length** Specifies the length of the long axis of the progress bar (width if horizontal, height if vertical). Command-Line Name: **-mode** Database Name: **mode** Database Class: **Mode** One of **determinate** or **indeterminate**. Command-Line Name: **-maximum** Database Name: **maximum** Database Class: **Maximum** A floating point number specifying the maximum **-value**. Defaults to 100. Command-Line Name: **-value** Database Name: **value** Database Class: **Value** The current value of the progress bar. In *determinate* mode, this represents the amount of work completed. In *indeterminate* mode, it is interpreted modulo **-maximum**; that is, the progress bar completes one “cycle” when the **-value** increases by **-maximum**. Command-Line Name: **-variable** Database Name: **variable** Database Class: **Variable** The name of a global Tcl variable which is linked to the **-value**. If specified, the **-value** of the progress bar is automatically set to the value of the variable whenever the latter is modified. Command-Line Name: **-phase** Database Name: **phase** Database Class: **Phase** Read-only option. The widget periodically increments the value of this option whenever the **-value** is greater than 0 and, in *determinate* mode, less than **-maximum**. This option may be used by the current theme to provide additional animation effects. 
Widget command -------------- *pathName* **cget** *option* Returns the current value of the specified *option*; see *ttk::widget(n)*. *pathName* **configure** ?*option*? ?*value option value ...*? Modify or query widget options; see *ttk::widget(n)*. *pathName* **identify** *x y* Returns the name of the element at position *x*, *y*. See *ttk::widget(n)*. *pathName* **instate** *statespec* ?*script*? Test the widget state; see *ttk::widget(n)*. *pathName* **start** ?*interval*? Begin autoincrement mode: schedules a recurring timer event that calls **step** every *interval* milliseconds. If omitted, *interval* defaults to 50 milliseconds (20 steps/second). *pathName* **state** ?*stateSpec*? Modify or query the widget state; see *ttk::widget(n)*. *pathName* **step** ?*amount*? Increments the **-value** by *amount*. *amount* defaults to 1.0 if omitted. *pathName* **stop** Stop autoincrement mode: cancels any recurring timer event initiated by *pathName* **start**. See also -------- **[ttk::widget](ttk_widget.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/ttk_progressbar.htm> tcl_tk ttk_panedwindow ttk\_panedwindow ================ [NAME](ttk_panedwindow.htm#M2) ttk::panedwindow — Multi-pane container window [SYNOPSIS](ttk_panedwindow.htm#M3) [DESCRIPTION](ttk_panedwindow.htm#M4) [STANDARD OPTIONS](ttk_panedwindow.htm#M5) [-class, undefined, undefined](ttk_widget.htm#M-class) [-cursor, cursor, Cursor](ttk_widget.htm#M-cursor) [-style, style, Style](ttk_widget.htm#M-style) [-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus) [WIDGET-SPECIFIC OPTIONS](ttk_panedwindow.htm#M6) [-orient, orient, Orient](ttk_panedwindow.htm#M7) [-width, width, Width](ttk_panedwindow.htm#M8) [-height, height, Height](ttk_panedwindow.htm#M9) [PANE OPTIONS](ttk_panedwindow.htm#M10) [-weight, weight, Weight](ttk_panedwindow.htm#M11) [WIDGET COMMAND](ttk_panedwindow.htm#M12) [*pathname* **add** *subwindow 
options...*](ttk_panedwindow.htm#M13) [*pathname* **forget** *pane*](ttk_panedwindow.htm#M14) [*pathname* **identify** *component x y*](ttk_panedwindow.htm#M15) [*pathname* **identify element** *x y*](ttk_panedwindow.htm#M16) [*pathname* **identify sash** *x y*](ttk_panedwindow.htm#M17) [*pathname* **insert** *pos subwindow options...*](ttk_panedwindow.htm#M18) [*pathname* **pane** *pane -option* ?*value* ?*-option value...*](ttk_panedwindow.htm#M19) [*pathname* **panes**](ttk_panedwindow.htm#M20) [*pathname* **sashpos** *index* ?*newpos*?](ttk_panedwindow.htm#M21) [VIRTUAL EVENTS](ttk_panedwindow.htm#M22) [SEE ALSO](ttk_panedwindow.htm#M23) Name ---- ttk::panedwindow — Multi-pane container window Synopsis -------- **ttk::panedwindow** *pathname* ?*options*? *pathname* **add** *window* ?*options...*? *pathname* **insert** *index* *window* ?*options...*? Description ----------- A **ttk::panedwindow** widget displays a number of subwindows, stacked either vertically or horizontally. The user may adjust the relative sizes of the subwindows by dragging the sash between panes. Standard options ---------------- **[-class, undefined, undefined](ttk_widget.htm#M-class)** **[-cursor, cursor, Cursor](ttk_widget.htm#M-cursor)** **[-style, style, Style](ttk_widget.htm#M-style)** **[-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus)** Widget-specific options ----------------------- Command-Line Name: **-orient** Database Name: **orient** Database Class: **Orient** Specifies the orientation of the window. If **vertical**, subpanes are stacked top-to-bottom; if **horizontal**, subpanes are stacked left-to-right. Command-Line Name: **-width** Database Name: **width** Database Class: **Width** If present and greater than zero, specifies the desired width of the widget in pixels. Otherwise, the requested width is determined by the width of the managed windows. 
Command-Line Name: **-height** Database Name: **height** Database Class: **Height** If present and greater than zero, specifies the desired height of the widget in pixels. Otherwise, the requested height is determined by the height of the managed windows. Pane options ------------ The following options may be specified for each pane: Command-Line Name: **-weight** Database Name: **weight** Database Class: **Weight** An integer specifying the relative stretchability of the pane. When the paned window is resized, the extra space is added to or removed from each pane in proportion to its **-weight**. Widget command -------------- Supports the standard **configure**, **cget**, **state**, and **instate** commands; see *ttk::widget(n)* for details. Additional commands: *pathname* **add** *subwindow options...* Adds a new pane to the window. See **[PANE OPTIONS](#M10)** for the list of available options. *pathname* **forget** *pane* Removes the specified subpane from the widget. *pane* is either an integer index or the name of a managed subwindow. *pathname* **identify** *component x y* Returns the name of the element under the point given by *x* and *y*, or the empty string if no component is present at that location. If *component* is omitted, it defaults to **sash**. The following subcommands are supported: *pathname* **identify element** *x y* Returns the name of the element at the specified location. *pathname* **identify sash** *x y* Returns the index of the sash at the specified location. *pathname* **insert** *pos subwindow options...* Inserts a pane at the specified position. *pos* is either the string **end**, an integer index, or the name of a managed subwindow. If *subwindow* is already managed by the paned window, moves it to the specified position. See **[PANE OPTIONS](#M10)** for the list of available options.
*pathname* **pane** *pane -option* ?*value* ?*-option value...* Query or modify the options of the specified *pane*, where *pane* is either an integer index or the name of a managed subwindow. If no *-option* is specified, returns a dictionary of the pane option values. If one *-option* is specified, returns the value of that *option*. Otherwise, sets the *-option*s to the corresponding *value*s. *pathname* **panes** Returns the list of all windows managed by the widget. *pathname* **sashpos** *index* ?*newpos*? If *newpos* is specified, sets the position of sash number *index*. May adjust the positions of adjacent sashes to ensure that positions are monotonically increasing. Sash positions are further constrained to be between 0 and the total size of the widget. Returns the new position of sash number *index*. Virtual events -------------- The panedwindow widget generates an **<<EnteredChild>>** virtual event on LeaveNotify/NotifyInferior events, because Tk does not execute binding scripts for <Leave> events when the pointer crosses from a parent to a child. The panedwindow widget needs to know when that happens. See also -------- **[ttk::widget](ttk_widget.htm)**, **[ttk::notebook](ttk_notebook.htm)**, **[panedwindow](panedwindow.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/ttk_panedwindow.htm> tcl_tk popup popup ===== Name ---- tk\_popup — Post a popup menu Synopsis -------- **tk\_popup** *menu x y* ?*entry*? Description ----------- This procedure posts a menu at a given position on the screen and configures Tk so that the menu and its cascaded children can be traversed with the mouse or the keyboard. *Menu* is the name of a menu widget and *x* and *y* are the root coordinates at which to display the menu. If *entry* is omitted or an empty string, the menu's upper left corner is positioned at the given point. 
Otherwise *entry* gives the index of an entry in *menu* and the menu will be positioned so that the entry is positioned over the given point. Example ------- How to attach a simple popup menu to a widget.

```
# Create a menu
set m [menu .popupMenu]
$m add command -label "Example 1" -command bell
$m add command -label "Example 2" -command bell
# Create something to attach it to
pack [label .l -text "Click me!"]
# Arrange for the menu to pop up when the label is clicked
bind .l <1> {tk_popup .popupMenu %X %Y}
```

See also -------- **[bind](bind.htm)**, **[menu](menu.htm)**, **tk\_optionMenu** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/popup.htm> tcl_tk ttk_labelframe ttk\_labelframe =============== [NAME](ttk_labelframe.htm#M2) ttk::labelframe — Container widget with optional label [SYNOPSIS](ttk_labelframe.htm#M3) [DESCRIPTION](ttk_labelframe.htm#M4) [STANDARD OPTIONS](ttk_labelframe.htm#M5) [-class, undefined, undefined](ttk_widget.htm#M-class) [-cursor, cursor, Cursor](ttk_widget.htm#M-cursor) [-style, style, Style](ttk_widget.htm#M-style) [-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus) [WIDGET-SPECIFIC OPTIONS](ttk_labelframe.htm#M6) [-labelanchor, labelAnchor, LabelAnchor](ttk_labelframe.htm#M7) [-text, text, Text](ttk_labelframe.htm#M8) [-underline, underline, Underline](ttk_labelframe.htm#M9) [-padding, padding, Padding](ttk_labelframe.htm#M10) [-labelwidget, labelWidget, LabelWidget](ttk_labelframe.htm#M11) [-width, width, Width](ttk_labelframe.htm#M12) [-height, height, Height](ttk_labelframe.htm#M13) [WIDGET COMMAND](ttk_labelframe.htm#M14) [SEE ALSO](ttk_labelframe.htm#M15) [KEYWORDS](ttk_labelframe.htm#M16) Name ---- ttk::labelframe — Container widget with optional label Synopsis -------- **ttk::labelframe** *pathName* ?*options*? Description ----------- A **ttk::labelframe** widget is a container used to group other widgets together.
It has an optional label, which may be a plain text string or another widget. Standard options ---------------- **[-class, undefined, undefined](ttk_widget.htm#M-class)** **[-cursor, cursor, Cursor](ttk_widget.htm#M-cursor)** **[-style, style, Style](ttk_widget.htm#M-style)** **[-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus)** Widget-specific options ----------------------- Command-Line Name: **-labelanchor** Database Name: **labelAnchor** Database Class: **LabelAnchor** Specifies where to place the label. Allowed values are (clockwise from the upper left corner): **nw**, **n**, **ne**, **en**, **e**, **es**, **se**, **s**, **sw**, **ws**, **w** and **wn**. The default value is theme-dependent. Command-Line Name: **-text** Database Name: **text** Database Class: **Text** Specifies the text of the label. Command-Line Name: **-underline** Database Name: **underline** Database Class: **Underline** If set, specifies the integer index (0-based) of a character to underline in the text string. The underlined character is used for mnemonic activation. Mnemonic activation for a **ttk::labelframe** sets the keyboard focus to the first child of the **ttk::labelframe** widget. Command-Line Name: **-padding** Database Name: **padding** Database Class: **Padding** Additional padding to include inside the border. Command-Line Name: **-labelwidget** Database Name: **labelWidget** Database Class: **LabelWidget** The name of a widget to use for the label. If set, overrides the **-text** option. The **-labelwidget** must be a child of the **[labelframe](labelframe.htm)** widget or one of the **[labelframe](labelframe.htm)**'s ancestors, and must belong to the same top-level widget as the **[labelframe](labelframe.htm)**. Command-Line Name: **-width** Database Name: **width** Database Class: **Width** If specified, the widget's requested width in pixels.
Command-Line Name: **-height** Database Name: **height** Database Class: **Height** If specified, the widget's requested height in pixels. (See *ttk::frame(n)* for further notes on **-width** and **-height**). Widget command -------------- Supports the standard widget commands **configure**, **cget**, **identify**, **instate**, and **state**; see *ttk::widget(n)*. See also -------- **[ttk::widget](ttk_widget.htm)**, **[ttk::frame](ttk_frame.htm)**, **[labelframe](labelframe.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/ttk_labelframe.htm> tcl_tk loadTk loadTk ====== Name ---- safe::loadTk — Load Tk into a safe interpreter. Synopsis -------- **safe::loadTk** *slave* ?**-use** *windowId*? ?**-display** *displayName*? Description ----------- Safe Tk is based on Safe Tcl, which provides a mechanism that allows restricted and mediated access to auto-loading and packages for safe interpreters. Safe Tk adds the ability to configure the interpreter for safe Tk operations and load Tk into safe interpreters. The **safe::loadTk** command initializes the required data structures in the named safe interpreter and then loads Tk into it. The interpreter must have been created with **safe::interpCreate** or have been initialized with **safe::interpInit**. The command returns the name of the safe interpreter. If **-use** is specified, the window identified by the specified system-dependent identifier *windowId* is used to contain the “.” window of the safe interpreter; it can be any valid id, possibly referencing a window belonging to another application. As a convenience, if the window you plan to use is a Tk window of the application, you can use the window name (e.g., “**.x.y**”) instead of its window id (e.g., from **[winfo id](winfo.htm)** **.x.y**). When **-use** is not specified, a new toplevel window is created for the “.” window of the safe interpreter.
On X11, if you want the embedded window to use a display other than the default one, specify it with **-display**. See the **[SECURITY ISSUES](#M5)** section below for implementation details. Security issues --------------- Please read the **[safe](../tclcmd/safe.htm)** manual page for Tcl to learn about the basic security considerations for Safe Tcl. **safe::loadTk** adds the value of **[tk\_library](tkvars.htm)** taken from the master interpreter to the virtual access path of the safe interpreter so that auto-loading will work in the safe interpreter. Tk initialization is now safe with respect to not trusting the slave's state for startup. **safe::loadTk** registers the slave's name so that when Tk initialization (**[Tk\_SafeInit](https://www.tcl.tk/man/tcl/TkLib/Tk_Init.htm)**) is called and in turn calls the master's **safe::InitTk**, it returns the desired **[argv](../tclcmd/tclvars.htm)** equivalent (**-use** *windowId*, correct **-display**, etc.). When **-use** is not used, the new toplevel created is specially decorated so the user is always aware that the user interface presented comes from potentially unsafe code and can easily delete the corresponding interpreter. On X11, conflicting **-use** and **-display** are likely to generate a fatal X error.
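The basic workflow can be shown in a short sketch. This is illustrative, not from the manual: the interpreter handle and widget name are arbitrary, and without **-use** a decorated toplevel is created for the slave's “.” window as described above.

```
package require Tk

# Create a safe interpreter, then load Tk into it
set slave [safe::interpCreate]
safe::loadTk $slave

# The safe interpreter can now build a GUI of its own
interp eval $slave {
    pack [label .msg -text "running in a safe interpreter"]
}
```

To embed the slave's user interface in an existing window of the master application, pass **-use** with the window name or id instead.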
See also -------- **[safe](../tclcmd/safe.htm)**, **[interp](../tclcmd/interp.htm)**, **[library](../tclcmd/library.htm)**, **[load](../tclcmd/load.htm)**, **[package](../tclcmd/package.htm)**, **[source](../tclcmd/source.htm)**, **[unknown](../tclcmd/unknown.htm)** Keywords -------- safe interpreter Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/loadTk.htm> tcl_tk ttk_treeview ttk\_treeview ============= [NAME](ttk_treeview.htm#M2) ttk::treeview — hierarchical multicolumn data display widget [SYNOPSIS](ttk_treeview.htm#M3) [DESCRIPTION](ttk_treeview.htm#M4) [STANDARD OPTIONS](ttk_treeview.htm#M5) [-class, undefined, undefined](ttk_widget.htm#M-class) [-cursor, cursor, Cursor](ttk_widget.htm#M-cursor) [-style, style, Style](ttk_widget.htm#M-style) [-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus) [-xscrollcommand, xScrollCommand, ScrollCommand](ttk_widget.htm#M-xscrollcommand) [-yscrollcommand, yScrollCommand, ScrollCommand](ttk_widget.htm#M-yscrollcommand) [WIDGET-SPECIFIC OPTIONS](ttk_treeview.htm#M6) [-columns, columns, Columns](ttk_treeview.htm#M7) [-displaycolumns, displayColumns, DisplayColumns](ttk_treeview.htm#M8) [-height, height, Height](ttk_treeview.htm#M9) [-padding, padding, Padding](ttk_treeview.htm#M10) [-selectmode, selectMode, SelectMode](ttk_treeview.htm#M11) [-show, show, Show](ttk_treeview.htm#M12) [**tree**](ttk_treeview.htm#M13) [**headings**](ttk_treeview.htm#M14) [WIDGET COMMAND](ttk_treeview.htm#M15) [*pathname* **bbox** *item* ?*column*?](ttk_treeview.htm#M16) [*pathname* **cget** *option*](ttk_treeview.htm#M17) [*pathname* **children** *item* ?*newchildren*?](ttk_treeview.htm#M18) [*pathname* **column** *column* ?*-option* ?*value -option value...*?](ttk_treeview.htm#M19) [**-id** *name*](ttk_treeview.htm#M20) [**-anchor**](ttk_treeview.htm#M21) [**-minwidth**](ttk_treeview.htm#M22) [**-stretch**](ttk_treeview.htm#M23) [**-width** *w*](ttk_treeview.htm#M24) [*pathname*
**configure** ?*option*? ?*value option value ...*?](ttk_treeview.htm#M25) [*pathname* **delete** *itemList*](ttk_treeview.htm#M26) [*pathname* **detach** *itemList*](ttk_treeview.htm#M27) [*pathname* **exists** *item*](ttk_treeview.htm#M28) [*pathname* **focus** ?*item*?](ttk_treeview.htm#M29) [*pathname* **heading** *column* ?*-option* ?*value -option value...*?](ttk_treeview.htm#M30) [**-text** *text*](ttk_treeview.htm#M31) [**-image** *imageName*](ttk_treeview.htm#M32) [**-anchor** *anchor*](ttk_treeview.htm#M33) [**-command** *script*](ttk_treeview.htm#M34) [*pathname* **identify** *component x y*](ttk_treeview.htm#M35) [*pathname* **identify region** *x y*](ttk_treeview.htm#M36) [heading](ttk_treeview.htm#M37) [separator](ttk_treeview.htm#M38) [tree](ttk_treeview.htm#M39) [cell](ttk_treeview.htm#M40) [*pathname* **identify column** *x y*](ttk_treeview.htm#M41) [*pathname* **identify element** *x y*](ttk_treeview.htm#M42) [*pathname* **identify row** *x y*](ttk_treeview.htm#M43) [*pathname* **index** *item*](ttk_treeview.htm#M44) [*pathname* **insert** *parent index* ?**-id** *id*? *options...*](ttk_treeview.htm#M45) [*pathname* **instate** *statespec* ?*script*?](ttk_treeview.htm#M46) [*pathname* **item** *item* ?*-option* ?*value -option value...*?](ttk_treeview.htm#M47) [*pathname* **move** *item parent index*](ttk_treeview.htm#M48) [*pathname* **next** *item*](ttk_treeview.htm#M49) [*pathname* **parent** *item*](ttk_treeview.htm#M50) [*pathname* **prev** *item*](ttk_treeview.htm#M51) [*pathname* **see** *item*](ttk_treeview.htm#M52) [*pathname* **selection** ?*selop itemList*?](ttk_treeview.htm#M53) [*pathname* **selection set** *itemList*](ttk_treeview.htm#M54) [*pathname* **selection add** *itemList*](ttk_treeview.htm#M55) [*pathname* **selection remove** *itemList*](ttk_treeview.htm#M56) [*pathname* **selection toggle** *itemList*](ttk_treeview.htm#M57) [*pathname* **set** *item* ?*column*? 
?*value*?](ttk_treeview.htm#M58) [*pathname* **state** ?*stateSpec*?](ttk_treeview.htm#M59) [*pathName* **tag** *args...*](ttk_treeview.htm#M60) [*pathName* **tag bind** *tagName* ?*sequence*? ?*script*?](ttk_treeview.htm#M61) [*pathName* **tag configure** *tagName* ?*option*? ?*value option value...*?](ttk_treeview.htm#M62) [*pathName* **tag has** *tagName* ?*item*?](ttk_treeview.htm#M63) [*pathName* **tag names**](ttk_treeview.htm#M64) [*pathName* **tag add** *tag items*](ttk_treeview.htm#M65) [*pathName* **tag remove** *tag* ?*items*?](ttk_treeview.htm#M66) [*pathName* **xview** *args*](ttk_treeview.htm#M67) [*pathName* **yview** *args*](ttk_treeview.htm#M68) [ITEM OPTIONS](ttk_treeview.htm#M69) [-text, text, Text](ttk_treeview.htm#M70) [-image, image, Image](ttk_treeview.htm#M71) [-values, values, Values](ttk_treeview.htm#M72) [-open, open, Open](ttk_treeview.htm#M73) [-tags, tags, Tags](ttk_treeview.htm#M74) [TAG OPTIONS](ttk_treeview.htm#M75) [**-foreground**](ttk_treeview.htm#M76) [**-background**](ttk_treeview.htm#M77) [**-font**](ttk_treeview.htm#M78) [**-image**](ttk_treeview.htm#M79) [COLUMN IDENTIFIERS](ttk_treeview.htm#M80) [VIRTUAL EVENTS](ttk_treeview.htm#M81) [<<TreeviewSelect>>](ttk_treeview.htm#M82) [<<TreeviewOpen>>](ttk_treeview.htm#M83) [<<TreeviewClose>>](ttk_treeview.htm#M84) [SEE ALSO](ttk_treeview.htm#M85) Name ---- ttk::treeview — hierarchical multicolumn data display widget Synopsis -------- **ttk::treeview** *pathname* ?*options*? Description ----------- The **ttk::treeview** widget displays a hierarchical collection of items. Each item has a textual label, an optional image, and an optional list of data values. The data values are displayed in successive columns after the tree label. The order in which data values are displayed may be controlled by setting the **-displaycolumns** widget option. The tree widget can also display column headings. 
Columns may be accessed by number or by symbolic names listed in the **-columns** widget option; see **[COLUMN IDENTIFIERS](#M80)**. Each item is identified by a unique name. The widget will generate item IDs if they are not supplied by the caller. There is a distinguished root item, named **{}**. The root item itself is not displayed; its children appear at the top level of the hierarchy. Each item also has a list of *tags*, which can be used to associate event bindings with individual items and control the appearance of the item. Treeview widgets support horizontal and vertical scrolling with the standard **-xscrollcommand**/**-yscrollcommand** options and the **xview**/**yview** widget commands. Standard options ---------------- **[-class, undefined, undefined](ttk_widget.htm#M-class)** **[-cursor, cursor, Cursor](ttk_widget.htm#M-cursor)** **[-style, style, Style](ttk_widget.htm#M-style)** **[-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus)** **[-xscrollcommand, xScrollCommand, ScrollCommand](ttk_widget.htm#M-xscrollcommand)** **[-yscrollcommand, yScrollCommand, ScrollCommand](ttk_widget.htm#M-yscrollcommand)** Widget-specific options ----------------------- Command-Line Name: **-columns** Database Name: **columns** Database Class: **Columns** A list of column identifiers, specifying the number of columns and their names. Command-Line Name: **-displaycolumns** Database Name: **displayColumns** Database Class: **DisplayColumns** A list of column identifiers (either symbolic names or integer indices) specifying which data columns are displayed and the order in which they appear, or the string **#all**. If set to **#all** (the default), all columns are shown in the order given. Command-Line Name: **-height** Database Name: **height** Database Class: **Height** Specifies the number of rows which should be visible. Note: the requested width is determined from the sum of the column widths.
Command-Line Name: **-padding** Database Name: **padding** Database Class: **Padding** Specifies the internal padding for the widget. The padding is a list of up to four length specifications; see **[Ttk\_GetPaddingFromObj()](https://www.tcl.tk/man/tcl/TkLib/ttk_Geometry.htm)** for details. Command-Line Name: **-selectmode** Database Name: **selectMode** Database Class: **SelectMode** Controls how the built-in class bindings manage the selection. One of **extended**, **browse**, or **none**. If set to **extended** (the default), multiple items may be selected. If **browse**, only a single item will be selected at a time. If **none**, the selection will not be changed. Note that application code and tag bindings can set the selection however they wish, regardless of the value of **-selectmode**. Command-Line Name: **-show** Database Name: **show** Database Class: **Show** A list containing zero or more of the following values, specifying which elements of the tree to display. **tree** Display tree labels in column #0. **headings** Display the heading row. The default is **tree headings**, i.e., show all elements. **NOTE:** Column #0 always refers to the tree column, even if **-show tree** is not specified. Widget command -------------- *pathname* **bbox** *item* ?*column*? Returns the bounding box (relative to the treeview widget's window) of the specified *item* in the form *x y width height*. If *column* is specified, returns the bounding box of that cell. If the *item* is not visible (i.e., if it is a descendant of a closed item or is scrolled offscreen), returns the empty list. *pathname* **cget** *option* Returns the current value of the specified *option*; see *ttk::widget(n)*. *pathname* **children** *item* ?*newchildren*? If *newchildren* is not specified, returns the list of children belonging to *item*. If *newchildren* is specified, replaces *item*'s child list with *newchildren*. 
Items in the old child list not present in the new child list are detached from the tree. None of the items in *newchildren* may be an ancestor of *item*. *pathname* **column** *column* ?*-option* ?*value -option value...*? Query or modify the options for the specified *column*. If no *-option* is specified, returns a dictionary of option/value pairs. If a single *-option* is specified, returns the value of that option. Otherwise, the options are updated with the specified values. The following options may be set on each column: **-id** *name* The column name. This is a read-only option. For example, [*$pathname* **column #***n* **-id**] returns the data column associated with display column #*n*. **-anchor** Specifies how the text in this column should be aligned with respect to the cell. One of **n**, **ne**, **e**, **se**, **s**, **sw**, **w**, **nw**, or **center**. **-minwidth** The minimum width of the column in pixels. The treeview widget will not make the column any smaller than **-minwidth** when the widget is resized or the user drags a column separator. **-stretch** Specifies whether or not the column's width should be adjusted when the widget is resized. **-width** *w* The width of the column in pixels. Default is something reasonable, probably 200 or so. Use *pathname column #0* to configure the tree column. *pathname* **configure** ?*option*? ?*value option value ...*? Modify or query widget options; see *ttk::widget(n)*. *pathname* **delete** *itemList* Deletes each of the items in *itemList* and all of their descendants. The root item may not be deleted. See also: **detach**. *pathname* **detach** *itemList* Unlinks all of the specified items in *itemList* from the tree. The items and all of their descendants are still present and may be reinserted at another point in the tree with the **move** operation, but will not be displayed until that is done. The root item may not be detached. See also: **delete**. 
*pathname* **exists** *item* Returns 1 if the specified *item* is present in the tree, 0 otherwise. *pathname* **focus** ?*item*? If *item* is specified, sets the focus item to *item*. Otherwise, returns the current focus item, or **{}** if there is none. *pathname* **heading** *column* ?*-option* ?*value -option value...*? Query or modify the heading options for the specified *column*. Valid options are: **-text** *text* The text to display in the column heading. **-image** *imageName* Specifies an image to display to the right of the column heading. **-anchor** *anchor* Specifies how the heading text should be aligned. One of the standard Tk anchor values. **-command** *script* A script to evaluate when the heading label is pressed. Use *pathname heading #0* to configure the tree column heading. *pathname* **identify** *component x y* Returns a description of the specified *component* under the point given by *x* and *y*, or the empty string if no such *component* is present at that position. The following subcommands are supported: *pathname* **identify region** *x y* Returns one of: heading Tree heading area; use [**pathname identify column** *x y*] to determine the heading number. separator Space between two column headings; [**pathname identify column** *x y*] will return the display column identifier of the heading to the left of the separator. tree The tree area. cell A data cell. *pathname* **identify item** *x y* Returns the item ID of the item at position *y*. *pathname* **identify column** *x y* Returns the data column identifier of the cell at position *x*. The tree column has ID **#0**. *pathname* **identify element** *x y* Returns the name of the element at position *x,y*. *pathname* **identify row** *x y* Obsolescent synonym for *pathname* **identify item**. See **[COLUMN IDENTIFIERS](#M80)** for a discussion of display columns and data columns. *pathname* **index** *item* Returns the integer index of *item* within its parent's list of children.
*pathname* **insert** *parent index* ?**-id** *id*? *options...* Creates a new item. *parent* is the item ID of the parent item, or the empty string **{}** to create a new top-level item. *index* is an integer, or the value **end**, specifying where in the list of *parent*'s children to insert the new item. If *index* is less than or equal to zero, the new node is inserted at the beginning; if *index* is greater than or equal to the current number of children, it is inserted at the end. If **-id** is specified, it is used as the item identifier; *id* must not already exist in the tree. Otherwise, a new unique identifier is generated. *pathname* **insert** returns the item identifier of the newly created item. See **[ITEM OPTIONS](#M69)** for the list of available options. *pathname* **instate** *statespec* ?*script*? Test the widget state; see *ttk::widget(n)*. *pathname* **item** *item* ?*-option* ?*value -option value...*? Query or modify the options for the specified *item*. If no *-option* is specified, returns a dictionary of option/value pairs. If a single *-option* is specified, returns the value of that option. Otherwise, the item's options are updated with the specified values. See **[ITEM OPTIONS](#M69)** for the list of available options. *pathname* **move** *item parent index* Moves *item* to position *index* in *parent*'s list of children. It is illegal to move an item under one of its descendants. If *index* is less than or equal to zero, *item* is moved to the beginning; if greater than or equal to the number of children, it is moved to the end. *pathname* **next** *item* Returns the identifier of *item*'s next sibling, or **{}** if *item* is the last child of its parent. *pathname* **parent** *item* Returns the ID of the parent of *item*, or **{}** if *item* is at the top level of the hierarchy. *pathname* **prev** *item* Returns the identifier of *item*'s previous sibling, or **{}** if *item* is the first child of its parent. 
*pathname* **see** *item* Ensure that *item* is visible: sets all of *item*'s ancestors to **-open true**, and scrolls the widget if necessary so that *item* is within the visible portion of the tree. *pathname* **selection** ?*selop itemList*? If *selop* is not specified, returns the list of selected items. Otherwise, *selop* is one of the following: *pathname* **selection set** *itemList* *itemList* becomes the new selection. *pathname* **selection add** *itemList* Add *itemList* to the selection. *pathname* **selection remove** *itemList* Remove *itemList* from the selection. *pathname* **selection toggle** *itemList* Toggle the selection state of each item in *itemList*. *pathname* **set** *item* ?*column*? ?*value*? With one argument, returns a dictionary of column/value pairs for the specified *item*. With two arguments, returns the current value of the specified *column*. With three arguments, sets the value of column *column* in item *item* to the specified *value*. See also **[COLUMN IDENTIFIERS](#M80)**. *pathname* **state** ?*stateSpec*? Modify or query the widget state; see *ttk::widget(n)*. *pathName* **tag** *args...* *pathName* **tag bind** *tagName* ?*sequence*? ?*script*? Add a Tk binding script for the event sequence *sequence* to the tag *tagName*. When an X event is delivered to an item, binding scripts for each of the item's **-tags** are evaluated in order as per *bindtags(n)*. **<KeyPress>**, **<KeyRelease>**, and virtual events are sent to the focus item. **<ButtonPress>**, **<ButtonRelease>**, and **<Motion>** events are sent to the item under the mouse pointer. No other event types are supported. The binding *script* undergoes **%**-substitutions before evaluation; see **bind(n)** for details. *pathName* **tag configure** *tagName* ?*option*? ?*value option value...*? Query or modify the options for the specified *tagName*. If one or more *option/value* pairs are specified, sets the value of those options for the specified tag.
If a single *option* is specified, returns the value of that option (or the empty string if the option has not been specified for *tagName*). With no additional arguments, returns a dictionary of the option settings for *tagName*. See **[TAG OPTIONS](#M75)** for the list of available options. *pathName* **tag has** *tagName* ?*item*? If *item* is specified, returns 1 or 0 depending on whether the specified item has the named tag. Otherwise, returns a list of all items which have the specified tag. *pathName* **tag names** Returns a list of all tags used by the widget. *pathName* **tag add** *tag items* Adds the specified *tag* to each of the listed *items*. If *tag* is already present for a particular item, then the **-tags** for that item are unchanged. *pathName* **tag remove** *tag* ?*items*? Removes the specified *tag* from each of the listed *items*. If *items* is omitted, removes *tag* from each item in the tree. If *tag* is not present for a particular item, then the **-tags** for that item are unchanged. *pathName* **xview** *args* Standard command for horizontal scrolling; see *ttk::widget(n)*. *pathName* **yview** *args* Standard command for vertical scrolling; see *ttk::widget(n)*. Item options ------------ The following item options may be specified for items in the **insert** and **item** widget commands. Command-Line Name: **-text** Database Name: **text** Database Class: **Text** The textual label to display for the item. Command-Line Name: **-image** Database Name: **image** Database Class: **Image** A Tk image, displayed to the left of the label. Command-Line Name: **-values** Database Name: **values** Database Class: **Values** The list of values associated with the item. Each item should have the same number of values as the **-columns** widget option. If there are fewer values than columns, the remaining values are assumed empty. If there are more values than columns, the extra values are ignored.
Command-Line Name: **-open** Database Name: **open** Database Class: **Open** A boolean value indicating whether the item's children should be displayed (**-open true**) or hidden (**-open false**). Command-Line Name: **-tags** Database Name: **tags** Database Class: **Tags** A list of tags associated with this item. Tag options ----------- The following options may be specified on tags: **-foreground** Specifies the text foreground color. **-background** Specifies the cell or item background color. **-font** Specifies the font to use when drawing text. **-image** Specifies the item image, in case the item's **-image** option is empty. Column identifiers ------------------ Column identifiers take any of the following forms: * A symbolic name from the list of **-columns**. * An integer *n*, specifying the *n*th data column. * A string of the form **#***n*, where *n* is an integer, specifying the *n*th display column. **NOTE:** Item **-values** may be displayed in a different order than the order in which they are stored. **NOTE:** Column #0 always refers to the tree column, even if **-show tree** is not specified. A *data column number* is an index into an item's **-values** list; a *display column number* is the column number in the tree where the values are displayed. Tree labels are displayed in column #0. If **-displaycolumns** is not set, then data column *n* is displayed in display column **#***n+1*. Again, **column #0 always refers to the tree column**. Virtual events -------------- The treeview widget generates the following virtual events. <<TreeviewSelect>> Generated whenever the selection changes. <<TreeviewOpen>> Generated just before setting the focus item to **-open true**. <<TreeviewClose>> Generated just after setting the focus item to **-open false**. The **[focus](focus.htm)** and **[selection](selection.htm)** widget commands can be used to determine the affected item or items. 
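As a brief illustration of the commands described above, the following sketch combines **insert**, **set**, **tag configure**, and the **<<TreeviewSelect>>** virtual event. The widget path `.tv`, the column name `size`, and the item labels are illustrative only, not part of this manual:

```
# A treeview with one data column plus the tree column (#0).
ttk::treeview .tv -columns {size} -show {tree headings}
.tv heading #0 -text "Name"
.tv heading size -text "Size"

# Insert a parent item and one child; -tags enables tag options and bindings.
set dir  [.tv insert {} end -text "docs" -open true -tags {folder}]
set file [.tv insert $dir end -text "readme.txt" -values {1024}]

# "set" with three arguments writes a cell; with two arguments it reads one.
.tv set $file size 2048
puts [.tv set $file size]            ;# -> 2048

# Tag options apply to every item carrying the tag.
.tv tag configure folder -foreground blue

# <<TreeviewSelect>> fires whenever the selection changes.
bind .tv <<TreeviewSelect>> {puts "selected: [.tv selection]"}
pack .tv
```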
See also -------- **[ttk::widget](ttk_widget.htm)**, **[listbox](listbox.htm)**, **[image](image.htm)**, **[bind](bind.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/ttk_treeview.htm>
tcl_tk send send ==== [NAME](send.htm#M2) send — Execute a command in a different application [SYNOPSIS](send.htm#M3) [DESCRIPTION](send.htm#M4) [**-async**](send.htm#M5) [**-displayof** *pathName*](send.htm#M6) [**--**](send.htm#M7) [APPLICATION NAMES](send.htm#M8) [DISABLING SENDS](send.htm#M9) [SECURITY](send.htm#M10) [EXAMPLE](send.htm#M11) [KEYWORDS](send.htm#M12) Name ---- send — Execute a command in a different application Synopsis -------- **send ?***options*? *app cmd* ?*arg arg ...*? Description ----------- This command arranges for *cmd* (and *arg*s) to be executed in the application named by *app*. It returns the result or error from that command execution. *App* may be the name of any application whose main window is on the display containing the sender's main window; it need not be within the same process. If no *arg* arguments are present, then the command to be executed is contained entirely within the *cmd* argument. If one or more *arg*s are present, they are concatenated to form the command to be executed, just as for the **[eval](../tclcmd/eval.htm)** command. If the initial arguments of the command begin with “-” they are treated as options. The following options are currently defined: **-async** Requests asynchronous invocation. In this case the **send** command will complete immediately without waiting for *cmd* to complete in the target application; no result will be available and errors in the sent command will be ignored. If the target application is in the same process as the sending application then the **-async** option is ignored. **-displayof** *pathName* Specifies that the target application's main window is on the display of the window given by *pathName*, instead of the display containing the application's main window. **--** Serves no purpose except to terminate the list of options. This option is needed only if *app* could contain a leading “-” character. 
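For example, the options above can be combined as follows (the application names are hypothetical):

```
# Fire-and-forget: return immediately, discard the result, ignore errors.
send -async OtherApp {puts "ping"}

# "--" ends option parsing; needed when the target name starts with "-".
send -- -oddlyNamedApp {set x 1}
```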
Application names ----------------- The name of an application is set initially from the name of the program or script that created the application. You can query and change the name of an application with the **[tk appname](tk.htm)** command. Disabling sends --------------- If the **send** command is removed from an application (e.g. with the command **[rename](../tclcmd/rename.htm)** **send {}**) then the application will not respond to incoming send requests anymore, nor will it be able to issue outgoing requests. Communication can be reenabled by invoking the **[tk appname](tk.htm)** command. Security -------- The **send** command is potentially a serious security loophole. On Unix, any application that can connect to your X server can send scripts to your applications. These incoming scripts can use Tcl to read and write your files and invoke subprocesses under your name. Host-based access control such as that provided by **xhost** is particularly insecure, since it allows anyone with an account on particular hosts to connect to your server, and if disabled it allows anyone anywhere to connect to your server. In order to provide at least a small amount of security, Tk checks the access control being used by the server and rejects incoming sends unless (a) **xhost**-style access control is enabled (i.e. only certain hosts can establish connections) and (b) the list of enabled hosts is empty. This means that applications cannot connect to your server unless they use some other form of authorization such as that provided by **xauth**. Under Windows, **send** is currently disabled. Most of the functionality is provided by the **[dde](../tclcmd/dde.htm)** command instead. Example ------- This script fragment can be used to make an application that only runs once on a particular display. 
``` if {[tk appname FoobarApp] ne "FoobarApp"} { **send** -async FoobarApp RemoteStart $argv exit } # The command that will be called remotely, which raises # the application main window and opens the requested files proc RemoteStart args { raise . foreach filename $args { OpenFile $filename } } ``` Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/send.htm> tcl_tk ttk_spinbox ttk\_spinbox ============ [NAME](ttk_spinbox.htm#M2) ttk::spinbox — Selecting text field widget [SYNOPSIS](ttk_spinbox.htm#M3) [DESCRIPTION](ttk_spinbox.htm#M4) [STANDARD OPTIONS](ttk_spinbox.htm#M5) [-class, undefined, undefined](ttk_widget.htm#M-class) [-cursor, cursor, Cursor](ttk_widget.htm#M-cursor) [-style, style, Style](ttk_widget.htm#M-style) [-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus) [-validate, validate, Validate](ttk_entry.htm#M-validate) [-validatecommand, validateCommand, ValidateCommand](ttk_entry.htm#M-validatecommand) [-xscrollcommand, xScrollCommand, ScrollCommand](ttk_widget.htm#M-xscrollcommand) [WIDGET-SPECIFIC OPTIONS](ttk_spinbox.htm#M6) [-from, from, From](ttk_spinbox.htm#M7) [-to, to, To](ttk_spinbox.htm#M8) [-increment, increment, Increment](ttk_spinbox.htm#M9) [-values, values, Values](ttk_spinbox.htm#M10) [-wrap, wrap, Wrap](ttk_spinbox.htm#M11) [-format, format, Format](ttk_spinbox.htm#M12) [-command, command, Command](ttk_spinbox.htm#M13) [INDICES](ttk_spinbox.htm#M14) [VALIDATION](ttk_spinbox.htm#M15) [WIDGET COMMAND](ttk_spinbox.htm#M16) [*pathName* **current** *index*](ttk_spinbox.htm#M17) [*pathName* **get**](ttk_spinbox.htm#M18) [*pathName* **set** *value*](ttk_spinbox.htm#M19) [VIRTUAL EVENTS](ttk_spinbox.htm#M20) [SEE ALSO](ttk_spinbox.htm#M21) [KEYWORDS](ttk_spinbox.htm#M22) Name ---- ttk::spinbox — Selecting text field widget Synopsis -------- **ttk::spinbox** *pathName* ?*options*? 
Description ----------- A **ttk::spinbox** widget is a **[ttk::entry](ttk_entry.htm)** widget with built-in up and down buttons that are used to either modify a numeric value or to select among a set of values. The widget implements all the features of the **[ttk::entry](ttk_entry.htm)** widget including support of the **-textvariable** option to link the value displayed by the widget to a Tcl variable. Standard options ---------------- **[-class, undefined, undefined](ttk_widget.htm#M-class)** **[-cursor, cursor, Cursor](ttk_widget.htm#M-cursor)** **[-style, style, Style](ttk_widget.htm#M-style)** **[-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus)** **[-validate, validate, Validate](ttk_entry.htm#M-validate)** **[-validatecommand, validateCommand, ValidateCommand](ttk_entry.htm#M-validatecommand)** **[-xscrollcommand, xScrollCommand, ScrollCommand](ttk_widget.htm#M-xscrollcommand)** Widget-specific options ----------------------- Command-Line Name: **-from** Database Name: **from** Database Class: **From** A floating-point value specifying the lowest value for the spinbox. This is used in conjunction with **-to** and **-increment** to set a numerical range. Command-Line Name: **-to** Database Name: **to** Database Class: **To** A floating-point value specifying the highest permissible value for the widget. See also **-from** and **-increment**. Command-Line Name: **-increment** Database Name: **increment** Database Class: **Increment** A floating-point value specifying the change in value to be applied each time one of the widget spin buttons is pressed. The up button applies a positive increment, the down button applies a negative increment. Command-Line Name: **-values** Database Name: **values** Database Class: **Values** This must be a Tcl list of values. If this option is set then this will override any range set using the **-from**, **-to** and **-increment** options. 
The widget will instead use the values specified beginning with the first value. Command-Line Name: **-wrap** Database Name: **wrap** Database Class: **Wrap** Must be a proper boolean value. If on, the spinbox will wrap around the values of data in the widget. Command-Line Name: **-format** Database Name: **format** Database Class: **Format** Specifies an alternate format to use when setting the string value when using the **-from** and **-to** range. This must be a format specifier of the form **%<pad>.<pad>f**, as it will format a floating-point number. Command-Line Name: **-command** Database Name: **command** Database Class: **Command** Specifies a Tcl command to be invoked whenever a spinbutton is invoked. Indices ------- See the **[ttk::entry](ttk_entry.htm)** manual for information about indexing characters. Validation ---------- See the **[ttk::entry](ttk_entry.htm)** manual for information about using the **-validate** and **-validatecommand** options. Widget command -------------- The following subcommands are possible for spinbox widgets in addition to the commands described for the **[ttk::entry](ttk_entry.htm)** widget: *pathName* **current** *index* *pathName* **get** Returns the spinbox's current value. *pathName* **set** *value* Set the spinbox string to *value*. If a **-format** option has been configured then this format will be applied. If formatting fails or is not set or the **-values** option has been used then the value is set directly. Virtual events -------------- The spinbox widget generates a **<<Increment>>** virtual event when the user presses <Up>, and a **<<Decrement>>** virtual event when the user presses <Down>. 
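A brief sketch combining the options and commands above (the widget paths and the callback script are illustrative only):

```
# Numeric range 0..10 in steps of 0.5, formatted with one decimal place.
ttk::spinbox .sb -from 0 -to 10 -increment 0.5 -format %.1f \
    -command {puts "value is now [.sb get]"}
.sb set 2.5                         ;# -format is applied on set

# Alternatively, -values overrides any -from/-to/-increment range.
ttk::spinbox .sb2 -values {red green blue} -wrap true
.sb2 set red

bind .sb <<Increment>> {puts "Up pressed"}
pack .sb .sb2
```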
See also -------- **[ttk::widget](ttk_widget.htm)**, **[ttk::entry](ttk_entry.htm)**, **[spinbox](spinbox.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/ttk_spinbox.htm> tcl_tk messageBox messageBox ========== [NAME](messagebox.htm#M2) tk\_messageBox — pops up a message window and waits for user response. [SYNOPSIS](messagebox.htm#M3) [DESCRIPTION](messagebox.htm#M4) [**-default** *name*](messagebox.htm#M5) [**-detail** *string*](messagebox.htm#M6) [**-icon** *iconImage*](messagebox.htm#M7) [**-message** *string*](messagebox.htm#M8) [**-parent** *window*](messagebox.htm#M9) [**-title** *titleString*](messagebox.htm#M10) [**-type** *predefinedType*](messagebox.htm#M11) [**abortretryignore**](messagebox.htm#M12) [**ok**](messagebox.htm#M13) [**okcancel**](messagebox.htm#M14) [**retrycancel**](messagebox.htm#M15) [**yesno**](messagebox.htm#M16) [**yesnocancel**](messagebox.htm#M17) [EXAMPLE](messagebox.htm#M18) [KEYWORDS](messagebox.htm#M19) Name ---- tk\_messageBox — pops up a message window and waits for user response. Synopsis -------- **tk\_messageBox** ?*option value ...*? Description ----------- This procedure creates and displays a message window with an application-specified message, an icon and a set of buttons. Each of the buttons in the message window is identified by a unique symbolic name (see the **-type** options). After the message window is popped up, **tk\_messageBox** waits for the user to select one of the buttons. Then it returns the symbolic name of the selected button. The following option-value pairs are supported: **-default** *name* *Name* gives the symbolic name of the default button for this message window ( “ok”, “cancel”, and so on). See **-type** for a list of the symbolic names. If this option is not specified, the first button in the dialog will be made the default. **-detail** *string* Specifies an auxiliary message to the main message given by the **-message** option. 
The message detail will be presented beneath the main message and, where supported by the OS, in a less emphasized font than the main message. **-icon** *iconImage* Specifies an icon to display. *IconImage* must be one of the following: **error**, **info**, **question** or **warning**. If this option is not specified, then the info icon will be displayed. **-message** *string* Specifies the message to display in this message box. The default value is an empty string. **-parent** *window* Makes *window* the logical parent of the message box. The message box is displayed on top of its parent window. **-title** *titleString* Specifies a string to display as the title of the message box. This option is ignored on Mac OS X, where platform guidelines forbid the use of a title on this kind of dialog. **-type** *predefinedType* Arranges for a predefined set of buttons to be displayed. The following values are possible for *predefinedType*: **abortretryignore** Displays three buttons whose symbolic names are **abort**, **retry** and **ignore**. **ok** Displays one button whose symbolic name is **ok**. **okcancel** Displays two buttons whose symbolic names are **ok** and **cancel**. **retrycancel** Displays two buttons whose symbolic names are **retry** and **cancel**. **yesno** Displays two buttons whose symbolic names are **yes** and **no**. **yesnocancel** Displays three buttons whose symbolic names are **yes**, **no** and **cancel**. Example ------- ``` set answer [**tk\_messageBox** -message "Really quit?" \ -icon question -type yesno \ -detail "Select \"Yes\" to make the application exit"] switch -- $answer { yes exit no {**tk\_messageBox** -message "I know you like this application!" 
\ -type ok} } ``` Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/messageBox.htm> tcl_tk dialog dialog ====== [NAME](dialog.htm#M2) tk\_dialog — Create modal dialog and wait for response [SYNOPSIS](dialog.htm#M3) [DESCRIPTION](dialog.htm#M4) [*window*](dialog.htm#M5) [*title*](dialog.htm#M6) [*text*](dialog.htm#M7) [*bitmap*](dialog.htm#M8) [*default*](dialog.htm#M9) [*string*](dialog.htm#M10) [EXAMPLE](dialog.htm#M11) [SEE ALSO](dialog.htm#M12) [KEYWORDS](dialog.htm#M13) Name ---- tk\_dialog — Create modal dialog and wait for response Synopsis -------- **tk\_dialog** *window title text bitmap default string string ...* Description ----------- This procedure is part of the Tk script library. It is largely *deprecated* by the **tk\_messageBox**. Its arguments describe a dialog box: *window* Name of top-level window to use for dialog. Any existing window by this name is destroyed. *title* Text to appear in the window manager's title bar for the dialog. *text* Message to appear in the top portion of the dialog box. *bitmap* If non-empty, specifies a bitmap (in a form suitable for [Tk\_GetBitmap](https://www.tcl.tk/man/tcl/TkLib/GetBitmap.htm)) to display in the top portion of the dialog, to the left of the text. If this is an empty string then no bitmap is displayed in the dialog. *default* If this is an integer greater than or equal to zero, then it gives the index of the button that is to be the default button for the dialog (0 for the leftmost button, and so on). If less than zero or an empty string then there will not be any default button. *string* There will be one button for each of these arguments. Each *string* specifies text to display in a button, in order from left to right. After creating a dialog box, **tk\_dialog** waits for the user to select one of the buttons either by clicking on the button with the mouse or by typing return to invoke the default button (if any). 
Then it returns the index of the selected button: 0 for the leftmost button, 1 for the button next to it, and so on. If the dialog's window is destroyed before the user selects one of the buttons, then -1 is returned. While waiting for the user to respond, **tk\_dialog** sets a local grab. This prevents the user from interacting with the application in any way except to invoke the dialog box. Example ------- ``` set reply [**tk\_dialog** .foo "The Title" "Do you want to say yes?" \ questhead 0 Yes No "I'm not sure"] ``` See also -------- **tk\_messageBox** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/dialog.htm> tcl_tk menubutton menubutton ========== [NAME](menubutton.htm#M2) menubutton — Create and manipulate 'menubutton' pop-up menu indicator widgets [SYNOPSIS](menubutton.htm#M3) [STANDARD OPTIONS](menubutton.htm#M4) [-activebackground, activeBackground, Foreground](options.htm#M-activebackground) [-activeforeground, activeForeground, Background](options.htm#M-activeforeground) [-anchor, anchor, Anchor](options.htm#M-anchor) [-background or -bg, background, Background](options.htm#M-background) [-bitmap, bitmap, Bitmap](options.htm#M-bitmap) [-borderwidth or -bd, borderWidth, BorderWidth](options.htm#M-borderwidth) [-compound, compound, Compound](options.htm#M-compound) [-cursor, cursor, Cursor](options.htm#M-cursor) [-disabledforeground, disabledForeground, DisabledForeground](options.htm#M-disabledforeground) [-font, font, Font](options.htm#M-font) [-foreground or -fg, foreground, Foreground](options.htm#M-foreground) [-highlightbackground, highlightBackground, HighlightBackground](options.htm#M-highlightbackground) [-highlightcolor, highlightColor, HighlightColor](options.htm#M-highlightcolor) [-highlightthickness, highlightThickness, HighlightThickness](options.htm#M-highlightthickness) [-image, image, Image](options.htm#M-image) [-justify, justify, Justify](options.htm#M-justify) [-padx, padX, 
Pad](options.htm#M-padx) [-pady, padY, Pad](options.htm#M-pady) [-relief, relief, Relief](options.htm#M-relief) [-takefocus, takeFocus, TakeFocus](options.htm#M-takefocus) [-text, text, Text](options.htm#M-text) [-textvariable, textVariable, Variable](options.htm#M-textvariable) [-underline, underline, Underline](options.htm#M-underline) [-wraplength, wrapLength, WrapLength](options.htm#M-wraplength) [WIDGET-SPECIFIC OPTIONS](menubutton.htm#M5) [-direction, direction, Height](menubutton.htm#M6) [-height, height, Height](menubutton.htm#M7) [-indicatoron, indicatorOn, IndicatorOn](menubutton.htm#M8) [-menu, menu, MenuName](menubutton.htm#M9) [-state, state, State](menubutton.htm#M10) [-width, width, Width](menubutton.htm#M11) [INTRODUCTION](menubutton.htm#M12) [WIDGET COMMAND](menubutton.htm#M13) [*pathName* **cget** *option*](menubutton.htm#M14) [*pathName* **configure** ?*option*? ?*value option value ...*?](menubutton.htm#M15) [DEFAULT BINDINGS](menubutton.htm#M16) [SEE ALSO](menubutton.htm#M17) [KEYWORDS](menubutton.htm#M18) Name ---- menubutton — Create and manipulate 'menubutton' pop-up menu indicator widgets Synopsis -------- **menubutton** *pathName* ?*options*? 
Standard options ---------------- **[-activebackground, activeBackground, Foreground](options.htm#M-activebackground)** **[-activeforeground, activeForeground, Background](options.htm#M-activeforeground)** **[-anchor, anchor, Anchor](options.htm#M-anchor)** **[-background or -bg, background, Background](options.htm#M-background)** **[-bitmap, bitmap, Bitmap](options.htm#M-bitmap)** **[-borderwidth or -bd, borderWidth, BorderWidth](options.htm#M-borderwidth)** **[-compound, compound, Compound](options.htm#M-compound)** **[-cursor, cursor, Cursor](options.htm#M-cursor)** **[-disabledforeground, disabledForeground, DisabledForeground](options.htm#M-disabledforeground)** **[-font, font, Font](options.htm#M-font)** **[-foreground or -fg, foreground, Foreground](options.htm#M-foreground)** **[-highlightbackground, highlightBackground, HighlightBackground](options.htm#M-highlightbackground)** **[-highlightcolor, highlightColor, HighlightColor](options.htm#M-highlightcolor)** **[-highlightthickness, highlightThickness, HighlightThickness](options.htm#M-highlightthickness)** **[-image, image, Image](options.htm#M-image)** **[-justify, justify, Justify](options.htm#M-justify)** **[-padx, padX, Pad](options.htm#M-padx)** **[-pady, padY, Pad](options.htm#M-pady)** **[-relief, relief, Relief](options.htm#M-relief)** **[-takefocus, takeFocus, TakeFocus](options.htm#M-takefocus)** **[-text, text, Text](options.htm#M-text)** **[-textvariable, textVariable, Variable](options.htm#M-textvariable)** **[-underline, underline, Underline](options.htm#M-underline)** **[-wraplength, wrapLength, WrapLength](options.htm#M-wraplength)** Widget-specific options ----------------------- Command-Line Name: **-direction** Database Name: **direction** Database Class: **Height** Specifies where the menu is going to be popped up. **above** tries to pop the menu above the menubutton. **below** tries to pop the menu below the menubutton. **left** tries to pop the menu to the left of the menubutton. 
**right** tries to pop the menu to the right of the menubutton. **flush** pops the menu directly over the menubutton. In the case of **above** or **below**, the direction will be reversed if the menu would show offscreen. Command-Line Name: **-height** Database Name: **height** Database Class: **Height** Specifies a desired height for the menubutton. If an image or bitmap is being displayed in the menubutton then the value is in screen units (i.e. any of the forms acceptable to **[Tk\_GetPixels](https://www.tcl.tk/man/tcl/TkLib/GetPixels.htm)**); for text it is in lines of text. If this option is not specified, the menubutton's desired height is computed from the size of the image or bitmap or text being displayed in it. Command-Line Name: **-indicatoron** Database Name: **indicatorOn** Database Class: **IndicatorOn** The value must be a proper boolean value. If it is true then a small indicator rectangle will be displayed on the right side of the menubutton and the default menu bindings will treat this as an option menubutton. If false then no indicator will be displayed. Command-Line Name: **-menu** Database Name: **[menu](menu.htm)** Database Class: **MenuName** Specifies the path name of the menu associated with this menubutton. The menu must be a child of the menubutton. Command-Line Name: **-state** Database Name: **state** Database Class: **State** Specifies one of three states for the menubutton: **normal**, **active**, or **disabled**. In normal state the menubutton is displayed using the **foreground** and **background** options. The active state is typically used when the pointer is over the menubutton. In active state the menubutton is displayed using the **-activeforeground** and **-activebackground** options. Disabled state means that the menubutton should be insensitive: the default bindings will refuse to activate the widget and will ignore mouse button presses. 
In this state the **-disabledforeground** and **-background** options determine how the button is displayed. Command-Line Name: **-width** Database Name: **width** Database Class: **Width** Specifies a desired width for the menubutton. If an image or bitmap is being displayed in the menubutton then the value is in screen units (i.e. any of the forms acceptable to **[Tk\_GetPixels](https://www.tcl.tk/man/tcl/TkLib/GetPixels.htm)**); for text it is in characters. If this option is not specified, the menubutton's desired width is computed from the size of the image or bitmap or text being displayed in it. Introduction ------------ The **menubutton** command creates a new window (given by the *pathName* argument) and makes it into a menubutton widget. Additional options, described above, may be specified on the command line or in the option database to configure aspects of the menubutton such as its colors, font, text, and initial relief. The **menubutton** command returns its *pathName* argument. At the time this command is invoked, there must not exist a window named *pathName*, but *pathName*'s parent must exist. A menubutton is a widget that displays a textual string, bitmap, or image and is associated with a menu widget. If text is displayed, it must all be in a single font, but it can occupy multiple lines on the screen (if it contains newlines or if wrapping occurs because of the **-wraplength** option) and one of the characters may optionally be underlined using the **-underline** option. In normal usage, pressing mouse button 1 over the menubutton causes the associated menu to be posted just underneath the menubutton. If the mouse is moved over the menu before releasing the mouse button, the button release causes the underlying menu entry to be invoked. When the button is released, the menu is unposted. Menubuttons are used to construct a **tk\_optionMenu**, which is the preferred mechanism for allowing a user to select one item from a list on Mac OS X. 
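As a minimal sketch of the options above (widget paths and labels are illustrative), note in particular that the associated menu must be a child of the menubutton:

```
# A menubutton whose menu pops up below it; the menu is a child window.
menubutton .mb -text "Options" -direction below -menu .mb.menu \
    -indicatoron true
menu .mb.menu -tearoff 0
.mb.menu add command -label "Open" -command {puts open}
.mb.menu add command -label "Quit" -command exit
pack .mb
```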
Menubuttons were also typically organized into groups called menu bars that allow scanning: if the mouse button is pressed over one menubutton (causing it to post its menu) and the mouse is moved over another menubutton in the same menu bar without releasing the mouse button, then the menu of the first menubutton is unposted and the menu of the new menubutton is posted instead. *This use is deprecated* in favor of setting a **[menu](menu.htm)** directly as a menubar; see the **[toplevel](toplevel.htm)**'s **-menu** option for how to do that. There are several interactions between menubuttons and menus; see the **[menu](menu.htm)** manual entry for information on various menu configurations, such as pulldown menus and option menus. Widget command -------------- The **menubutton** command creates a new Tcl command whose name is *pathName*. This command may be used to invoke various operations on the widget. It has the following general form: ``` *pathName option* ?*arg arg ...*? ``` *Option* and the *arg*s determine the exact behavior of the command. The following commands are possible for menubutton widgets: *pathName* **cget** *option* Returns the current value of the configuration option given by *option*. *Option* may have any of the values accepted by the **menubutton** command. *pathName* **configure** ?*option*? ?*value option value ...*? Query or modify the configuration options of the widget. If no *option* is specified, returns a list describing all of the available options for *pathName* (see **[Tk\_ConfigureInfo](https://www.tcl.tk/man/tcl/TkLib/ConfigWidg.htm)** for information on the format of this list). If *option* is specified with no *value*, then the command returns a list describing the one named option (this list will be identical to the corresponding sublist of the value returned if no *option* is specified). 
If one or more *option-value* pairs are specified, then the command modifies the given widget option(s) to have the given value(s); in this case the command returns an empty string. *Option* may have any of the values accepted by the **menubutton** command. Default bindings ---------------- Tk automatically creates class bindings for menubuttons that give them the following default behavior: 1. A menubutton activates whenever the mouse passes over it and deactivates whenever the mouse leaves it. 2. Pressing mouse button 1 over a menubutton posts the menubutton: its relief changes to raised and its associated menu is posted under the menubutton. If the mouse is dragged down into the menu with the button still down, and if the mouse button is then released over an entry in the menu, the menubutton is unposted and the menu entry is invoked. 3. If button 1 is pressed over a menubutton and then released over that menubutton, the menubutton stays posted: you can still move the mouse over the menu and click button 1 on an entry to invoke it. Once a menu entry has been invoked, the menubutton unposts itself. 4. If button 1 is pressed over a menubutton and then dragged over some other menubutton, the original menubutton unposts itself and the new menubutton posts. 5. If button 1 is pressed over a menubutton and released outside any menubutton or menu, the menubutton unposts without invoking any menu entry. 6. When a menubutton is posted, its associated menu claims the input focus to allow keyboard traversal of the menu and its submenus. See the **[menu](menu.htm)** manual entry for details on these bindings. 7. If the **-underline** option has been specified for a menubutton then keyboard traversal may be used to post the menubutton: Alt+*x*, where *x* is the underlined character (or its lower-case or upper-case equivalent), may be typed in any window under the menubutton's toplevel to post the menubutton. 8. 
The F10 key may be typed in any window to post the first menubutton under its toplevel window that is not disabled. 9. If a menubutton has the input focus, the space and return keys post the menubutton. If the menubutton's state is **disabled** then none of the above actions occur: the menubutton is completely non-responsive. The behavior of menubuttons can be changed by defining new bindings for individual widgets or by redefining the class bindings. See also -------- **[ttk::menubutton](ttk_menubutton.htm)**, **[menu](menu.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/menubutton.htm>
tcl_tk tkvars tkvars ====== [NAME](tkvars.htm#M2) geometry, tk\_library, tk\_patchLevel, tk\_strictMotif, tk\_version — Variables used or set by Tk [DESCRIPTION](tkvars.htm#M3) [**tk\_library**](tkvars.htm#M4) [**tk\_patchLevel**](tkvars.htm#M5) [**tk\_strictMotif**](tkvars.htm#M6) [**tk\_version**](tkvars.htm#M7) [INTERNAL AND DEBUGGING VARIABLES](tkvars.htm#M8) [**tk::Priv**](tkvars.htm#M9) [**tk\_textRedraw**](tkvars.htm#M10) [**tk\_textRelayout**](tkvars.htm#M11) [OTHER GLOBAL VARIABLES](tkvars.htm#M12) [**geometry**](tkvars.htm#M13) [SEE ALSO](tkvars.htm#M14) [KEYWORDS](tkvars.htm#M15) Name ---- geometry, tk\_library, tk\_patchLevel, tk\_strictMotif, tk\_version — Variables used or set by Tk Description ----------- The following Tcl variables are either set or used by Tk at various times in its execution: **tk\_library** This variable holds the file name for a directory containing a library of Tcl scripts related to Tk. These scripts include an initialization file that is normally processed whenever a Tk application starts up, plus other files containing procedures that implement default behaviors for widgets. The initial value of **tk\_library** is set when Tk is added to an interpreter; this is done by searching several different directories until one is found that contains an appropriate Tk startup script. If the **TK\_LIBRARY** environment variable exists, then the directory it names is checked first. If **TK\_LIBRARY** is not set or does not refer to an appropriate directory, then Tk checks several other directories based on a compiled-in default location, the location of the Tcl library directory, the location of the binary containing the application, and the current working directory. The variable can be modified by an application to switch to a different library. **tk\_patchLevel** Contains a dot-separated sequence of decimal integers giving the current patch level for Tk. 
The patch level is incremented for each new release or patch, and it uniquely identifies an official version of Tk. This value is normally the same as the result of “**[package require](../tclcmd/package.htm)** **Tk**”. **tk\_strictMotif** This variable is set to zero by default. If an application sets it to one, then Tk attempts to adhere as closely as possible to Motif look-and-feel standards. For example, active elements such as buttons and scrollbar sliders will not change color when the pointer passes over them. Modern applications should not normally set this variable. **tk\_version** Tk sets this variable in the interpreter for each application. The variable holds the current version number of the Tk library in the form *major*.*minor*. *Major* and *minor* are integers. The major version number increases in any Tk release that includes changes that are not backward compatible (i.e. whenever existing Tk applications and scripts may have to change to work with the new release). The minor version number increases with each new release of Tk, except that it resets to zero whenever the major version number changes. ### Internal and debugging variables These variables should not normally be set by user code. **tk::Priv** This variable is an array containing several pieces of information that are private to Tk. The elements of **tk::Priv** are used by Tk library procedures and default bindings. They should not be accessed by any code outside Tk. **tk\_textRedraw** **tk\_textRelayout** These variables are set by text widgets when they have debugging turned on. The values written to these variables can be used to test or debug text widget operations. These variables are mostly used by Tk's test suite. Other global variables ---------------------- The following variables are only guaranteed to exist in **[wish](../usercmd/wish.htm)** executables; the Tk library does not define them itself but many Tk environments do. 
**geometry** If set, contains the user-supplied geometry specification to use for the main Tk window. See also -------- **[package](../tclcmd/package.htm)**, **[tclvars](../tclcmd/tclvars.htm)**, **[wish](../usercmd/wish.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/tkvars.htm> tcl_tk tk tk == [NAME](tk.htm#M2) tk — Manipulate Tk internal state [SYNOPSIS](tk.htm#M3) [DESCRIPTION](tk.htm#M4) [**tk appname** ?*newName*?](tk.htm#M5) [**tk busy** *subcommand* ...](tk.htm#M6) [**tk caret** *window* ?**-x** *x*? ?**-y** *y*? ?**-height** *height*?](tk.htm#M7) [**tk inactive** ?**-displayof** *window*? ?**reset**?](tk.htm#M8) [**tk fontchooser** *subcommand* ...](tk.htm#M9) [**tk scaling** ?**-displayof** *window*? ?*number*?](tk.htm#M10) [**tk useinputmethods** ?**-displayof** *window*? ?*boolean*?](tk.htm#M11) [**tk windowingsystem**](tk.htm#M12) [SEE ALSO](tk.htm#M13) [KEYWORDS](tk.htm#M14) Name ---- tk — Manipulate Tk internal state Synopsis -------- **tk** *option* ?*arg arg ...*? Description ----------- The **tk** command provides access to miscellaneous elements of Tk's internal state. Most of the information manipulated by this command pertains to the application as a whole, or to a screen or display, rather than to a particular window. The command can take any of a number of different forms depending on the *option* argument. The legal forms are: **tk appname** ?*newName*? If *newName* is not specified, this command returns the name of the application (the name that may be used in **[send](send.htm)** commands to communicate with the application). If *newName* is specified, then the name of the application is changed to *newName*. If the given name is already in use, then a suffix of the form “ **#2**” or “ **#3**” is appended in order to make the name unique. The command's result is the name actually chosen. *newName* should not start with a capital letter. 
This will interfere with option processing, since names starting with capitals are assumed to be classes; as a result, Tk may not be able to find some options for the application. If sends have been disabled by deleting the **[send](send.htm)** command, this command will reenable them and recreate the **[send](send.htm)** command. **tk busy** *subcommand* ... This command controls the marking of window hierarchies as “busy”, rendering them non-interactive while some other operation is proceeding. For more details see the **[busy](busy.htm)** manual page. **tk caret** *window* ?**-x** *x*? ?**-y** *y*? ?**-height** *height*? Sets and queries the caret location for the display of the specified Tk window *window*. The caret is the per-display cursor location used for indicating global focus (e.g. to comply with Microsoft Accessibility guidelines), as well as for location of the over-the-spot XIM (X Input Methods) or Windows IME windows. If no options are specified, the last values used for setting the caret are returned in option-value pair format. **-x** and **-y** represent window-relative coordinates, and **-height** is the height of the current cursor location, or the height of the specified *window* if none is given. **tk inactive** ?**-displayof** *window*? ?**reset**? Returns a positive integer, the number of milliseconds since the last time the user interacted with the system. If the **-displayof** option is given then the return value refers to the display of *window*; otherwise it refers to the display of the application's main window. **tk inactive** will return -1 if querying the user inactivity time is not supported by the system, or when called in a safe interpreter. If the literal string **reset** is given as an additional argument, the timer is reset and an empty string is returned. Resetting the inactivity time is forbidden in safe interpreters and will throw an error if tried. **tk fontchooser** *subcommand* ... Controls the Tk font selection dialog. 
For more details see the **[fontchooser](fontchooser.htm)** manual page. **tk scaling** ?**-displayof** *window*? ?*number*? Sets and queries the current scaling factor used by Tk to convert between physical units (for example, points, inches, or millimeters) and pixels. The *number* argument is a floating point number that specifies the number of pixels per point on *window*'s display. If the *window* argument is omitted, it defaults to the main window. If the *number* argument is omitted, the current value of the scaling factor is returned. A “point” is a unit of measurement equal to 1/72 inch. A scaling factor of 1.0 corresponds to 1 pixel per point, which is equivalent to a standard 72 dpi monitor. A scaling factor of 1.25 would mean 1.25 pixels per point, which is the setting for a 90 dpi monitor; setting the scaling factor to 1.25 on a 72 dpi monitor would cause everything in the application to be displayed 1.25 times as large as normal. The initial value for the scaling factor is set when the application starts, based on properties of the installed monitor, but it can be changed at any time. Measurements made after the scaling factor is changed will use the new scaling factor, but it is undefined whether existing widgets will resize themselves dynamically to accommodate the new scaling factor. **tk useinputmethods** ?**-displayof** *window*? ?*boolean*? Sets and queries the state of whether Tk should use XIM (X Input Methods) for filtering events. The resulting state is returned. XIM is used in some locales (i.e., Japanese, Korean), to handle special input devices. This feature is only significant on X. If XIM support is not available, this will always return 0. If the *window* argument is omitted, it defaults to the main window. If the *boolean* argument is omitted, the current state is returned. This is turned on by default for the main display. 
**tk windowingsystem** Returns the current Tk windowing system, one of **x11** (X11-based), **win32** (MS Windows), or **aqua** (Mac OS X Aqua). See also -------- **[busy](busy.htm)**, **[fontchooser](fontchooser.htm)**, **[send](send.htm)**, **[winfo](winfo.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/tk.htm> tcl_tk cursors cursors ======= Name ---- cursors — mouse cursors available in Tk Description ----------- The **-cursor** widget option allows a Tk programmer to change the mouse cursor for a particular widget. The cursor names recognized by Tk on all platforms are: ``` X_cursor arrow based_arrow_down based_arrow_up boat bogosity bottom_left_corner bottom_right_corner bottom_side bottom_tee box_spiral center_ptr circle clock coffee_mug cross cross_reverse crosshair diamond_cross dot dotbox double_arrow draft_large draft_small draped_box exchange fleur gobbler gumby hand1 hand2 heart icon iron_cross left_ptr left_side left_tee leftbutton ll_angle lr_angle man middlebutton mouse none pencil pirate plus question_arrow right_ptr right_side right_tee rightbutton rtl_logo sailboat sb_down_arrow sb_h_double_arrow sb_left_arrow sb_right_arrow sb_up_arrow sb_v_double_arrow shuttle sizing spider spraycan star target tcross top_left_arrow top_left_corner top_right_corner top_side top_tee trek ul_angle umbrella ur_angle watch xterm ``` The **none** cursor can be specified to eliminate the cursor. 
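For example, a widget's cursor can be set at creation time with the **-cursor** option or changed later with **configure** (the widget names here are illustrative):

```
button .b -text "Example" -cursor watch
pack .b
.b configure -cursor hand2   ;# switch to a different cursor later
canvas .c -cursor none       ;# hide the cursor entirely over this widget
pack .c
```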
Portability issues ------------------ **Windows** On Windows systems, the following cursors are mapped to native cursors: ``` arrow center_ptr crosshair fleur ibeam icon none sb_h_double_arrow sb_v_double_arrow watch xterm ``` And the following additional cursors are available: ``` no starting size size_ne_sw size_ns size_nw_se size_we uparrow wait ``` **Mac OS X** On Mac OS X systems, the following cursors are mapped to native cursors: ``` arrow top_left_arrow left_ptr cross crosshair tcross ibeam none xterm ``` And the following additional native cursors are available: ``` copyarrow aliasarrow contextualmenuarrow movearrow text cross-hair hand openhand closedhand fist pointinghand resize resizeleft resizeright resizeleftright resizeup resizedown resizeupdown resizebottomleft resizetopleft resizebottomright resizetopright notallowed poof wait countinguphand countingdownhand countingupanddownhand spinning help bucket cancel eyedrop eyedrop-full zoom-in zoom-out ``` Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/cursors.htm> tcl_tk ttk_entry ttk\_entry ========== [NAME](ttk_entry.htm#M2) ttk::entry — Editable text field widget [SYNOPSIS](ttk_entry.htm#M3) [DESCRIPTION](ttk_entry.htm#M4) [STANDARD OPTIONS](ttk_entry.htm#M5) [-class, undefined, undefined](ttk_widget.htm#M-class) [-cursor, cursor, Cursor](ttk_widget.htm#M-cursor) [-style, style, Style](ttk_widget.htm#M-style) [-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus) [-xscrollcommand, xScrollCommand, ScrollCommand](ttk_widget.htm#M-xscrollcommand) [WIDGET-SPECIFIC OPTIONS](ttk_entry.htm#M6) [-exportselection, exportSelection, ExportSelection](ttk_entry.htm#M7) [-invalidcommand, invalidCommand, InvalidCommand](ttk_entry.htm#M8) [-justify, justify, Justify](ttk_entry.htm#M9) [-show, show, Show](ttk_entry.htm#M10) [-state, state, State](ttk_entry.htm#M11) [-textvariable, textVariable, Variable](ttk_entry.htm#M12) [-validate, validate, 
Validate](ttk_entry.htm#M-validate) [-validatecommand, validateCommand, ValidateCommand](ttk_entry.htm#M-validatecommand) [-width, width, Width](ttk_entry.htm#M13) [NOTES](ttk_entry.htm#M14) [INDICES](ttk_entry.htm#M15) [*number*](ttk_entry.htm#M16) [**@***number*](ttk_entry.htm#M17) [**end**](ttk_entry.htm#M18) [**insert**](ttk_entry.htm#M19) [**sel.first**](ttk_entry.htm#M20) [**sel.last**](ttk_entry.htm#M21) [WIDGET COMMAND](ttk_entry.htm#M22) [*pathName* **bbox** *index*](ttk_entry.htm#M23) [*pathName* **delete** *first* ?*last*?](ttk_entry.htm#M24) [*pathName* **get**](ttk_entry.htm#M25) [*pathName* **icursor** *index*](ttk_entry.htm#M26) [*pathName* **index** *index*](ttk_entry.htm#M27) [*pathName* **insert** *index string*](ttk_entry.htm#M28) [*pathName* **selection** *option arg*](ttk_entry.htm#M29) [*pathName* **selection clear**](ttk_entry.htm#M30) [*pathName* **selection present**](ttk_entry.htm#M31) [*pathName* **selection range** *start* *end*](ttk_entry.htm#M32) [*pathName* **validate**](ttk_entry.htm#M33) [*pathName* **xview** *args*](ttk_entry.htm#M34) [*pathName* **xview**](ttk_entry.htm#M35) [*pathName* **xview** *index*](ttk_entry.htm#M36) [*pathName* **xview moveto** *fraction*](ttk_entry.htm#M37) [*pathName* **xview scroll** *number what*](ttk_entry.htm#M38) [VALIDATION](ttk_entry.htm#M39) [VALIDATION MODES](ttk_entry.htm#M40) [**none**](ttk_entry.htm#M41) [**key**](ttk_entry.htm#M42) [**focus**](ttk_entry.htm#M43) [**focusin**](ttk_entry.htm#M44) [**focusout**](ttk_entry.htm#M45) [**all**](ttk_entry.htm#M46) [VALIDATION SCRIPT SUBSTITUTIONS](ttk_entry.htm#M47) [**%d**](ttk_entry.htm#M48) [**%i**](ttk_entry.htm#M49) [**%P**](ttk_entry.htm#M50) [**%s**](ttk_entry.htm#M51) [**%S**](ttk_entry.htm#M52) [**%v**](ttk_entry.htm#M53) [**%V**](ttk_entry.htm#M54) [**%W**](ttk_entry.htm#M55) [DIFFERENCES FROM TK ENTRY WIDGET VALIDATION](ttk_entry.htm#M56) [DEFAULT BINDINGS](ttk_entry.htm#M57) [WIDGET STATES](ttk_entry.htm#M58) [SEE 
ALSO](ttk_entry.htm#M59) [KEYWORDS](ttk_entry.htm#M60) Name ---- ttk::entry — Editable text field widget Synopsis -------- **ttk::entry** *pathName* ?*options*? Description ----------- A **ttk::entry** widget displays a one-line text string and allows that string to be edited by the user. The value of the string may be linked to a Tcl variable with the **-textvariable** option. Entry widgets support horizontal scrolling with the standard **-xscrollcommand** option and **xview** widget command. Standard options ---------------- **[-class, undefined, undefined](ttk_widget.htm#M-class)** **[-cursor, cursor, Cursor](ttk_widget.htm#M-cursor)** **[-style, style, Style](ttk_widget.htm#M-style)** **[-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus)** **[-xscrollcommand, xScrollCommand, ScrollCommand](ttk_widget.htm#M-xscrollcommand)** Widget-specific options ----------------------- Command-Line Name: **-exportselection** Database Name: **exportSelection** Database Class: **ExportSelection** A boolean value specifying whether or not a selection in the widget should be linked to the X selection. If the selection is exported, then selecting in the widget deselects the current X selection, selecting outside the widget deselects any widget selection, and the widget will respond to selection retrieval requests when it has a selection. Command-Line Name: **-invalidcommand** Database Name: **invalidCommand** Database Class: **InvalidCommand** A script template to evaluate whenever the **-validatecommand** returns 0. See **[VALIDATION](#M39)** below for more information. Command-Line Name: **-justify** Database Name: **justify** Database Class: **Justify** Specifies how the text is aligned within the entry widget. One of **left**, **center**, or **right**. Command-Line Name: **-show** Database Name: **show** Database Class: **Show** If this option is specified, then the true contents of the entry are not displayed in the window. 
Instead, each character in the entry's value will be displayed as the first character in the value of this option, such as “\*” or a bullet. This is useful, for example, if the entry is to be used to enter a password. If characters in the entry are selected and copied elsewhere, the information copied will be what is displayed, not the true contents of the entry. Command-Line Name: **-state** Database Name: **state** Database Class: **State** Compatibility option; see *ttk::widget(n)* for details. Specifies one of three states for the entry, **normal**, **disabled**, or **readonly**. See **[WIDGET STATES](#M58)**, below. Command-Line Name: **-textvariable** Database Name: **textVariable** Database Class: **Variable** Specifies the name of a global variable whose value is linked to the entry widget's contents. Whenever the variable changes value, the widget's contents are updated, and vice versa. Command-Line Name: **-validate** Database Name: **validate** Database Class: **Validate** Specifies the mode in which validation should operate: **none**, **focus**, **focusin**, **focusout**, **key**, or **all**. Default is **none**, meaning that validation is disabled. See **[VALIDATION](#M39)** below. Command-Line Name: **-validatecommand** Database Name: **validateCommand** Database Class: **ValidateCommand** A script template to evaluate whenever validation is triggered. If set to the empty string (the default), validation is disabled. The script must return a boolean value. See **[VALIDATION](#M39)** below. Command-Line Name: **-width** Database Name: **width** Database Class: **Width** Specifies an integer value indicating the desired width of the entry window, in average-size characters of the widget's font. Notes ----- A portion of the entry may be selected as described below. 
If an entry is exporting its selection (see the **-exportselection** option), then it will observe the standard X11 protocols for handling the selection; entry selections are available as type **[STRING](../tclcmd/string.htm)**. Entries also observe the standard Tk rules for dealing with the input focus. When an entry has the input focus it displays an *insert cursor* to indicate where new characters will be inserted. Entries are capable of displaying strings that are too long to fit entirely within the widget's window. In this case, only a portion of the string will be displayed; commands described below may be used to change the view in the window. Entries use the standard **-xscrollcommand** mechanism for interacting with scrollbars (see the description of the **-xscrollcommand** option for details). Indices ------- Many of the **[entry](entry.htm)** widget commands take one or more indices as arguments. An index specifies a particular character in the entry's string, in any of the following ways: *number* Specifies the character as a numerical index, where 0 corresponds to the first character in the string. **@***number* In this form, *number* is treated as an x-coordinate in the entry's window; the character spanning that x-coordinate is used. For example, “**@0**” indicates the left-most character in the window. **end** Indicates the character just after the last one in the entry's string. This is equivalent to specifying a numerical index equal to the length of the entry's string. **insert** Indicates the character adjacent to and immediately following the insert cursor. **sel.first** Indicates the first character in the selection. It is an error to use this form if the selection is not in the entry window. **sel.last** Indicates the character just after the last one in the selection. It is an error to use this form if the selection is not in the entry window. Abbreviations may be used for any of the forms above, e.g. “**e**” or “**sel.l**”. 
In general, out-of-range indices are automatically rounded to the nearest legal value. Widget command -------------- The following subcommands are possible for entry widgets: *pathName* **bbox** *index* Returns a list of four numbers describing the bounding box of the character given by *index*. The first two elements of the list give the x and y coordinates of the upper-left corner of the screen area covered by the character (in pixels relative to the widget) and the last two elements give the width and height of the character, in pixels. The bounding box may refer to a region outside the visible area of the window. *pathName* **delete** *first* ?*last*? Delete one or more elements of the entry. *First* is the index of the first character to delete, and *last* is the index of the character just after the last one to delete. If *last* is not specified it defaults to *first*+1, i.e. a single character is deleted. This command returns the empty string. *pathName* **get** Returns the entry's string. *pathName* **icursor** *index* Arrange for the insert cursor to be displayed just before the character given by *index*. Returns the empty string. *pathName* **index** *index* Returns the numerical index corresponding to *index*. *pathName* **insert** *index string* Insert *string* just before the character indicated by *index*. Returns the empty string. *pathName* **selection** *option arg* This command is used to adjust the selection within an entry. It has several forms, depending on *option*: *pathName* **selection clear** Clear the selection if it is currently in this widget. If the selection is not in this widget then the command has no effect. Returns the empty string. *pathName* **selection present** Returns 1 if there are characters selected in the entry, 0 if nothing is selected. *pathName* **selection range** *start* *end* Sets the selection to include the characters starting with the one indexed by *start* and ending with the one just before *end*. 
If *end* refers to the same character as *start* or an earlier one, then the entry's selection is cleared. *pathName* **validate** Force revalidation, independent of the conditions specified by the **-validate** option. Returns 0 if validation fails, 1 if it succeeds. Sets or clears the **invalid** state accordingly. See **[VALIDATION](#M39)** below for more details. *pathName* **xview** *args* This command is used to query and change the horizontal position of the text in the widget's window. It can take any of the following forms: *pathName* **xview** Returns a list containing two elements. Each element is a real fraction between 0 and 1; together they describe the horizontal span that is visible in the window. For example, if the first element is .2 and the second element is .6, 20% of the entry's text is off-screen to the left, the middle 40% is visible in the window, and 40% of the text is off-screen to the right. These are the same values passed to scrollbars via the **-xscrollcommand** option. *pathName* **xview** *index* Adjusts the view in the window so that the character given by *index* is displayed at the left edge of the window. *pathName* **xview moveto** *fraction* Adjusts the view in the window so that the character *fraction* of the way through the text appears at the left edge of the window. *Fraction* must be a fraction between 0 and 1. *pathName* **xview scroll** *number what* This command shifts the view in the window left or right according to *number* and *what*. *Number* must be an integer. *What* must be either **units** or **pages**. If *what* is **units**, the view adjusts left or right by *number* average-width characters on the display; if it is **pages** then the view adjusts by *number* screenfuls. If *number* is negative then characters farther to the left become visible; if it is positive then characters farther to the right become visible. 
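A brief sketch of the widget commands above in action (the widget name and text are illustrative):

```
ttk::entry .e
pack .e
.e insert end "Hello, world"   ;# append text
.e icursor end                 ;# place the insert cursor after the text
.e selection range 0 5         ;# select "Hello"
.e delete sel.first sel.last   ;# delete the selection, leaving ", world"
.e xview moveto 0              ;# scroll back to the start
```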
The entry widget also supports the following generic **[ttk::widget](ttk_widget.htm)** widget subcommands (see *ttk::widget(n)* for details): | | | | | --- | --- | --- | | **cget** | **configure** | **identify** | | **instate** | **state** | Validation ---------- The **-validate**, **-validatecommand**, and **-invalidcommand** options are used to enable entry widget validation. ### Validation modes There are two main validation modes: *prevalidation*, in which the **-validatecommand** is evaluated prior to each edit and the return value is used to determine whether to accept or reject the change; and *revalidation*, in which the **-validatecommand** is evaluated to determine whether the current value is valid. The **-validate** option determines when validation occurs; it may be set to any of the following values: **none** Default. This means validation will only occur when specifically requested by the **validate** widget command. **key** The entry will be prevalidated prior to each edit (specifically, whenever the **insert** or **delete** widget commands are called). If prevalidation fails, the edit is rejected. **focus** The entry is revalidated when the entry receives or loses focus. **focusin** The entry is revalidated when the entry receives focus. **focusout** The entry is revalidated when the entry loses focus. **all** Validation is performed for all above conditions. The **-invalidcommand** is evaluated whenever the **-validatecommand** returns a false value. The **-validatecommand** and **-invalidcommand** may modify the entry widget's value via the widget **insert** or **delete** commands, or by setting the linked **-textvariable**. If either does so during prevalidation, then the edit is rejected regardless of the value returned by the **-validatecommand**. If **-validatecommand** is empty (the default), validation always succeeds. 
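As a minimal sketch of key prevalidation (the widget name is illustrative), the following entry rejects any edit whose result is not an integer, ringing the bell on a rejected keystroke:

```
ttk::entry .num -validate key \
    -validatecommand {string is integer %P} \
    -invalidcommand bell
pack .num
```

Note that `string is integer` (without `-strict`) accepts the empty string, so the user can still delete all the text while editing.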
### Validation script substitutions It is possible to perform percent substitutions on the **-validatecommand** and **-invalidcommand**, just as in a **[bind](bind.htm)** script. The following substitutions are recognized: **%d** Type of action: 1 for **insert** prevalidation, 0 for **delete** prevalidation, or -1 for revalidation. **%i** Index of character string to be inserted/deleted, if any, otherwise -1. **%P** In prevalidation, the new value of the entry if the edit is accepted. In revalidation, the current value of the entry. **%s** The current value of entry prior to editing. **%S** The text string being inserted/deleted, if any, {} otherwise. **%v** The current value of the **-validate** option. **%V** The validation condition that triggered the callback (**key**, **focusin**, **focusout**, or **forced**). **%W** The name of the entry widget. ### Differences from tk entry widget validation The standard Tk entry widget automatically disables validation (by setting **-validate** to **none**) if the **-validatecommand** or **-invalidcommand** modifies the entry's value. The Tk themed entry widget only disables validation if one of the validation scripts raises an error, or if **-validatecommand** does not return a valid boolean value. (Thus, it is not necessary to re-enable validation after modifying the entry value in a validation script). In addition, the standard entry widget invokes validation whenever the linked **-textvariable** is modified; the Tk themed entry widget does not. Default bindings ---------------- The entry widget's default bindings enable the following behavior. In the descriptions below, “word” refers to a contiguous group of letters, digits, or “\_” characters, or any single character other than these. * Clicking mouse button 1 positions the insert cursor just before the character underneath the mouse cursor, sets the input focus to this widget, and clears any selection in the widget. 
Dragging with mouse button 1 down strokes out a selection between the insert cursor and the character under the mouse. * Double-clicking with mouse button 1 selects the word under the mouse and positions the insert cursor at the end of the word. Dragging after a double click strokes out a selection consisting of whole words. * Triple-clicking with mouse button 1 selects all of the text in the entry and positions the insert cursor at the end of the line. * The ends of the selection can be adjusted by dragging with mouse button 1 while the Shift key is down. If the button is double-clicked before dragging then the selection will be adjusted in units of whole words. * Clicking mouse button 1 with the Control key down will position the insert cursor in the entry without affecting the selection. * If any normal printing characters are typed in an entry, they are inserted at the point of the insert cursor. * The view in the entry can be adjusted by dragging with mouse button 2. If mouse button 2 is clicked without moving the mouse, the selection is copied into the entry at the position of the mouse cursor. * If the mouse is dragged out of the entry on the left or right sides while button 1 is pressed, the entry will automatically scroll to make more text visible (if there is more text off-screen on the side where the mouse left the window). * The Left and Right keys move the insert cursor one character to the left or right; they also clear any selection in the entry. If Left or Right is typed with the Shift key down, then the insertion cursor moves and the selection is extended to include the new character. Control-Left and Control-Right move the insert cursor by words, and Control-Shift-Left and Control-Shift-Right move the insert cursor by words and also extend the selection. Control-b and Control-f behave the same as Left and Right, respectively. * The Home key and Control-a move the insert cursor to the beginning of the entry and clear any selection in the entry. 
Shift-Home moves the insert cursor to the beginning of the entry and extends the selection to that point. * The End key and Control-e move the insert cursor to the end of the entry and clear any selection in the entry. Shift-End moves the cursor to the end and extends the selection to that point. * Control-/ selects all the text in the entry. * Control-\ clears any selection in the entry. * The standard Tk <<Cut>>, <<Copy>>, <<Paste>>, and <<Clear>> virtual events operate on the selection in the expected manner. * The Delete key deletes the selection, if there is one in the entry. If there is no selection, it deletes the character to the right of the insert cursor. * The BackSpace key and Control-h delete the selection, if there is one in the entry. If there is no selection, it deletes the character to the left of the insert cursor. * Control-d deletes the character to the right of the insert cursor. * Control-k deletes all the characters to the right of the insertion cursor. Widget states ------------- In the **disabled** state, the entry cannot be edited and the text cannot be selected. In the **readonly** state, no insert cursor is displayed and the entry cannot be edited (specifically: the **insert** and **delete** commands have no effect). The **disabled** state is the same as **readonly**, and in addition text cannot be selected. Note that changes to the linked **-textvariable** will still be reflected in the entry, even if it is disabled or readonly. Typically, the text is “grayed-out” in the **disabled** state, and a different background is used in the **readonly** state. The entry widget sets the **invalid** state if revalidation fails, and clears it whenever validation succeeds. See also -------- **[ttk::widget](ttk_widget.htm)**, **[entry](entry.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/ttk_entry.htm>
tcl_tk tkwait tkwait ====== Name ---- tkwait — Wait for variable to change or window to be destroyed Synopsis -------- **tkwait variable** *name* **tkwait visibility** *name* **tkwait window** *name* Description ----------- The **tkwait** command waits for one of several things to happen, then it returns without taking any other actions. The return value is always an empty string. If the first argument is **variable** (or any abbreviation of it) then the second argument is the name of a global variable and the command waits for that variable to be modified. If the first argument is **visibility** (or any abbreviation of it) then the second argument is the name of a window and the **tkwait** command waits for a change in its visibility state (as indicated by the arrival of a VisibilityNotify event). This form is typically used to wait for a newly-created window to appear on the screen before taking some action. If the first argument is **window** (or any abbreviation of it) then the second argument is the name of a window and the **tkwait** command waits for that window to be destroyed. This form is typically used to wait for a user to finish interacting with a dialog box before using the result of that interaction. While the **tkwait** command is waiting it processes events in the normal fashion, so the application will continue to respond to user interactions. If an event handler invokes **tkwait** again, the nested call to **tkwait** must complete before the outer call can complete. 
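For example, a modal-style dialog can be implemented by waiting for the dialog window to be destroyed (the window names are illustrative):

```
toplevel .dlg
button .dlg.ok -text "OK" -command {destroy .dlg}
pack .dlg.ok
tkwait visibility .dlg   ;# wait until the dialog appears on screen
tkwait window .dlg       ;# process events here until the dialog is closed
# execution resumes once the user dismisses the dialog
```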
See also
--------

**[bind](bind.htm)**, **[vwait](../tclcmd/vwait.htm)**

Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html)
<https://www.tcl.tk/man/tcl/TkCmd/tkwait.htm>

tcl_tk busy

busy
====

[NAME](busy.htm#M2) busy — confine pointer and keyboard events to a window sub-tree [SYNOPSIS](busy.htm#M3) [DESCRIPTION](busy.htm#M4) [INTRODUCTION](busy.htm#M5) [EXAMPLE](busy.htm#M6) [OPERATIONS](busy.htm#M7) [**tk busy** *window* ?*option value*?...](busy.htm#M8) [**tk busy hold** *window* ?*option value*?...](busy.htm#M9) [**-cursor** *cursorName*](busy.htm#M10) [**tk busy cget** *window* *option*](busy.htm#M11) [**tk busy configure** *window* ?*option value*?...](busy.htm#M12) [**tk busy forget** *window* ?*window*?...](busy.htm#M13) [**tk busy current** ?*pattern*?](busy.htm#M14) [**tk busy status** *window*](busy.htm#M15) [EVENT HANDLING](busy.htm#M16) [BINDINGS](busy.htm#M17) [ENTER/LEAVE EVENTS](busy.htm#M18) [KEYBOARD EVENTS](busy.htm#M19) [PORTABILITY](busy.htm#M20) [SEE ALSO](busy.htm#M21) [KEYWORDS](busy.htm#M22)

Name
----

busy — confine pointer and keyboard events to a window sub-tree

Synopsis
--------

**tk busy** *window* ?*options*?
**tk busy hold** *window* ?*options*?
**tk busy configure** *window* ?*option value*?...
**tk busy forget** *window* ?*window*?...
**tk busy current** ?*pattern*?
**tk busy status** *window*

Description
-----------

The **tk busy** command provides a simple means to block keyboard, button, and pointer events from Tk widgets, while overriding the widget's cursor with a configurable busy cursor.

Introduction
------------

There are many times in applications where you want to temporarily restrict what actions the user can take. For example, an application could have a “Run” button that, when pressed, causes some processing to occur. However, while the application is busy processing, you probably don't want the user to be able to click the “Run” button again.
You may also want to restrict the user from performing other tasks, such as clicking a “Print” button. The **tk busy** command lets you make Tk widgets busy. This means that user interactions such as button clicks, moving the mouse, typing at the keyboard, etc. are ignored by the widget. You can set a special cursor (like a watch) that overrides the widget's normal cursor, providing feedback that the application (widget) is temporarily busy. When a widget is made busy, the widget and all of its descendants will ignore events. It's easy to make an entire panel of widgets busy. You can simply make the toplevel widget (such as “.”) busy. This is easier and far more efficient than recursively traversing the widget hierarchy, disabling each widget and re-configuring its cursor. Often, the **tk busy** command can be used instead of Tk's **[grab](grab.htm)** command. Unlike **[grab](grab.htm)**, which restricts all user interactions to one widget, with the **tk busy** command you can have more than one widget active (for example, a “Cancel” dialog and a “Help” button).

### Example

You can make several widgets busy by simply making their ancestor widget busy using the **hold** operation.

```
frame .top
button .top.button; canvas .top.canvas
pack .top.button .top.canvas
pack .top
# . . .
tk busy hold .top
update
```

All the widgets within **.top** (including **.top**) are now busy. Using **[update](../tclcmd/update.htm)** ensures that the **tk busy** command takes effect before any other user events can occur. When the application is no longer busy processing, you can allow user interactions again and free any allocated resources with the **forget** operation.

```
tk busy forget .top
```

The busy window has a configurable cursor. You can change the busy cursor using the **configure** operation.

```
tk busy configure .top -cursor "watch"
```

Destroying the widget will also clean up any resources allocated by the **tk busy** command.
Operations
----------

The following operations are available for the **tk busy** command:

**tk busy** *window* ?*option value*?...
Shortcut for the **tk busy hold** command.

**tk busy hold** *window* ?*option value*?...
Makes the specified *window* (and its descendants in the Tk window hierarchy) appear busy. *Window* must be a valid path name of a Tk widget. A transparent window is put in front of the specified window. This transparent window is mapped the next time idle tasks are processed, and the specified window and its descendants will be blocked from user interactions. Normally **[update](../tclcmd/update.htm)** should be called immediately afterward to ensure that the hold operation is in effect before the application starts its processing. The following configuration options are valid:

**-cursor** *cursorName*
Specifies the cursor to be displayed when the widget is made busy. *CursorName* can be in any form accepted by **[Tk\_GetCursor](https://www.tcl.tk/man/tcl/TkLib/GetCursor.htm)**. The default cursor is **wait** on Windows and **watch** on other platforms.

**tk busy cget** *window* *option*
Queries the **tk busy** command configuration options for *window*. *Window* must be the path name of a widget previously made busy by the **hold** operation. The command returns the present value of the specified *option*. *Option* may have any of the values accepted by the **hold** operation.

**tk busy configure** *window* ?*option value*?...
Queries or modifies the **tk busy** command configuration options for *window*. *Window* must be the path name of a widget previously made busy by the **hold** operation. If no options are specified, a list describing all of the available options for *window* (see **[Tk\_ConfigureInfo](https://www.tcl.tk/man/tcl/TkLib/ConfigWidg.htm)** for information on the format of this list) is returned.
If *option* is specified with no *value*, then the command returns a list describing the one named option (this list will be identical to the corresponding sublist of the value returned if no *option* is specified). If one or more *option-value* pairs are specified, then the command modifies the given widget option(s) to have the given value(s); in this case the command returns the empty string. *Option* may have any of the values accepted by the **hold** operation. Please note that the option database is referenced through *window*. For example, if the widget **.frame** is to be made busy, the busy cursor can be specified for it by either **[option](option.htm)** command:

```
option add *frame.busyCursor gumby
option add *Frame.BusyCursor gumby
```

**tk busy forget** *window* ?*window*?...
Releases resources allocated by the **tk busy** command for *window*, including the transparent window. User events will again be received by *window*. Resources are also released when *window* is destroyed. *Window* must be the name of a widget specified in the **hold** operation, otherwise an error is reported.

**tk busy current** ?*pattern*?
Returns the pathnames of all widgets that are currently busy. If a *pattern* is given, only the path names of busy widgets matching *pattern* are returned.

**tk busy status** *window*
Returns the status of a widget *window*. If *window* presently cannot receive user interactions, **1** is returned, otherwise **0**.

Event handling
--------------

### Bindings

The event blocking feature is implemented by creating and mapping a transparent window that completely covers the widget. When the busy window is mapped, it invisibly shields the widget and its hierarchy from all events that may be sent. Like Tk widgets, busy windows have widget names in the Tk window hierarchy. This means that you can use the **[bind](bind.htm)** command to handle events in the busy window.

```
tk busy hold .frame.canvas
bind .frame.canvas_Busy <Enter> {
    ...
}
```

Normally the busy window is a sibling of the widget. The name of the busy window is “*widget***\_Busy**” where *widget* is the name of the widget to be made busy. In the previous example, the pathname of the busy window is “**.frame.canvas\_Busy**”. The exception is when the widget is a toplevel widget (such as “.”) where the busy window can't be made a sibling. The busy window is then a child of the widget, named “*widget***.\_Busy**” where *widget* is the name of the toplevel widget. In the following example, the pathname of the busy window is “**.\_Busy**”.

```
tk busy hold .
bind ._Busy <Enter> {
    ...
}
```

### Enter/leave events

Mapping and unmapping busy windows generate Enter/Leave events for all widgets they cover. Please note this if you are tracking Enter/Leave events in widgets.

### Keyboard events

When a widget is made busy, the widget is prevented from gaining the keyboard focus by the busy window. But if the widget already had focus, it may still receive keyboard events. To prevent this, you must move focus to another window.

```
tk busy hold .frame
label .dummy
focus .dummy
update
```

The above example moves the focus from **.frame** immediately after invoking **hold**, so that no keyboard events will be sent to **.frame** or any of its descendants.

Portability
-----------

Note that the **tk busy** command does not currently have any effect on OSX when Tk is built using Aqua support.
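Putting the operations above together, a sketch of a complete busy cycle (the widget name `.work` is illustrative):

```
frame .work
pack .work
# Make the panel busy; update maps the transparent busy window.
tk busy hold .work -cursor watch
update
# tk busy status .work now returns 1, and .work is listed by [tk busy current].
# ... long-running processing ...
tk busy forget .work   ;# tk busy status .work returns 0 again
```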
See also -------- **[grab](grab.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/busy.htm> tcl_tk options options ======= [NAME](options.htm#M2) options — Standard options supported by widgets [DESCRIPTION](options.htm#M3) [-activebackground, activeBackground, Foreground](options.htm#M-activebackground) [-activeborderwidth, activeBorderWidth, BorderWidth](options.htm#M-activeborderwidth) [-activeforeground, activeForeground, Background](options.htm#M-activeforeground) [-anchor, anchor, Anchor](options.htm#M-anchor) [-background or -bg, background, Background](options.htm#M-background) [-bitmap, bitmap, Bitmap](options.htm#M-bitmap) [-borderwidth or -bd, borderWidth, BorderWidth](options.htm#M-borderwidth) [-cursor, cursor, Cursor](options.htm#M-cursor) [-compound, compound, Compound](options.htm#M-compound) [-disabledforeground, disabledForeground, DisabledForeground](options.htm#M-disabledforeground) [-exportselection, exportSelection, ExportSelection](options.htm#M-exportselection) [-font, font, Font](options.htm#M-font) [-foreground or -fg, foreground, Foreground](options.htm#M-foreground) [-highlightbackground, highlightBackground, HighlightBackground](options.htm#M-highlightbackground) [-highlightcolor, highlightColor, HighlightColor](options.htm#M-highlightcolor) [-highlightthickness, highlightThickness, HighlightThickness](options.htm#M-highlightthickness) [-image, image, Image](options.htm#M-image) [-insertbackground, insertBackground, Foreground](options.htm#M-insertbackground) [-insertborderwidth, insertBorderWidth, BorderWidth](options.htm#M-insertborderwidth) [-insertofftime, insertOffTime, OffTime](options.htm#M-insertofftime) [-insertontime, insertOnTime, OnTime](options.htm#M-insertontime) [-insertwidth, insertWidth, InsertWidth](options.htm#M-insertwidth) [-jump, jump, Jump](options.htm#M-jump) [-justify, justify, Justify](options.htm#M-justify) [-orient, orient, 
Orient](options.htm#M-orient) [-padx, padX, Pad](options.htm#M-padx) [-pady, padY, Pad](options.htm#M-pady) [-relief, relief, Relief](options.htm#M-relief) [-repeatdelay, repeatDelay, RepeatDelay](options.htm#M-repeatdelay) [-repeatinterval, repeatInterval, RepeatInterval](options.htm#M-repeatinterval) [-selectbackground, selectBackground, Foreground](options.htm#M-selectbackground) [-selectborderwidth, selectBorderWidth, BorderWidth](options.htm#M-selectborderwidth) [-selectforeground, selectForeground, Background](options.htm#M-selectforeground) [-setgrid, setGrid, SetGrid](options.htm#M-setgrid) [-takefocus, takeFocus, TakeFocus](options.htm#M-takefocus) [-text, text, Text](options.htm#M-text) [-textvariable, textVariable, Variable](options.htm#M-textvariable) [-troughcolor, troughColor, Background](options.htm#M-troughcolor) [-underline, underline, Underline](options.htm#M-underline) [-wraplength, wrapLength, WrapLength](options.htm#M-wraplength) [-xscrollcommand, xScrollCommand, ScrollCommand](options.htm#M-xscrollcommand) [-yscrollcommand, yScrollCommand, ScrollCommand](options.htm#M-yscrollcommand) [SEE ALSO](options.htm#M4) [KEYWORDS](options.htm#M5)

Name
----

options — Standard options supported by widgets

Description
-----------

This manual entry describes the common configuration options supported by widgets in the Tk toolkit. Not every widget supports every option (see the manual entries for individual widgets for a list of the standard options supported by each widget), but if a widget does support an option with one of the names listed below, then the option has exactly the effect described below. In the descriptions below, “Command-Line Name” refers to the switch used in class commands and **configure** widget commands to set this value.
For example, if an option's command-line switch is **-foreground** and there exists a widget **.a.b.c**, then the command ``` **.a.b.c configure -foreground black** ``` may be used to specify the value **black** for the option in the widget **.a.b.c**. Command-line switches may be abbreviated, as long as the abbreviation is unambiguous. “Database Name” refers to the option's name in the option database (e.g. in .Xdefaults files). “Database Class” refers to the option's class value in the option database. Command-Line Name: **-activebackground** Database Name: **activeBackground** Database Class: **Foreground** Specifies background color to use when drawing active elements. An element (a widget or portion of a widget) is active if the mouse cursor is positioned over the element and pressing a mouse button will cause some action to occur. If strict Motif compliance has been requested by setting the **tk\_strictMotif** variable, this option will normally be ignored; the normal background color will be used instead. For some elements on Windows and Macintosh systems, the active color will only be used while mouse button 1 is pressed over the element. Command-Line Name: **-activeborderwidth** Database Name: **activeBorderWidth** Database Class: **BorderWidth** Specifies a non-negative value indicating the width of the 3-D border drawn around active elements. See above for definition of active elements. The value may have any of the forms acceptable to **[Tk\_GetPixels](https://www.tcl.tk/man/tcl/TkLib/GetPixels.htm)**. This option is typically only available in widgets displaying more than one element at a time (e.g. menus but not buttons). Command-Line Name: **-activeforeground** Database Name: **activeForeground** Database Class: **Background** Specifies foreground color to use when drawing active elements. See above for definition of active elements. 
Command-Line Name: **-anchor** Database Name: **anchor** Database Class: **Anchor** Specifies how the information in a widget (e.g. text or a bitmap) is to be displayed in the widget. Must be one of the values **n**, **ne**, **e**, **se**, **s**, **sw**, **w**, **nw**, or **center**. For example, **nw** means display the information such that its top-left corner is at the top-left corner of the widget. Command-Line Name: **-background or -bg** Database Name: **background** Database Class: **Background** Specifies the normal background color to use when displaying the widget. Command-Line Name: **-bitmap** Database Name: **bitmap** Database Class: **Bitmap** Specifies a bitmap to display in the widget, in any of the forms acceptable to **[Tk\_GetBitmap](https://www.tcl.tk/man/tcl/TkLib/GetBitmap.htm)**. The exact way in which the bitmap is displayed may be affected by other options such as **-anchor** or **-justify**. Typically, if this option is specified then it overrides other options that specify a textual value to display in the widget but this is controlled by the **-compound** option; the **-bitmap** option may be reset to an empty string to re-enable a text display. In widgets that support both **-bitmap** and **-image** options, **-image** will usually override **-bitmap**. Command-Line Name: **-borderwidth or -bd** Database Name: **borderWidth** Database Class: **BorderWidth** Specifies a non-negative value indicating the width of the 3-D border to draw around the outside of the widget (if such a border is being drawn; the **-relief** option typically determines this). The value may also be used when drawing 3-D effects in the interior of the widget. The value may have any of the forms acceptable to **[Tk\_GetPixels](https://www.tcl.tk/man/tcl/TkLib/GetPixels.htm)**. Command-Line Name: **-cursor** Database Name: **cursor** Database Class: **Cursor** Specifies the mouse cursor to be used for the widget. 
The value may have any of the forms acceptable to **[Tk\_GetCursor](https://www.tcl.tk/man/tcl/TkLib/GetCursor.htm)**. In addition, if an empty string is specified, it indicates that the widget should defer to its parent for cursor specification. Command-Line Name: **-compound** Database Name: **compound** Database Class: **Compound** Specifies if the widget should display text and bitmaps/images at the same time, and if so, where the bitmap/image should be placed relative to the text. Must be one of the values **none**, **bottom**, **top**, **left**, **right**, or **center**. For example, the (default) value **none** specifies that the bitmap or image should (if defined) be displayed instead of the text, the value **left** specifies that the bitmap or image should be displayed to the left of the text, and the value **center** specifies that the bitmap or image should be displayed on top of the text. Command-Line Name: **-disabledforeground** Database Name: **disabledForeground** Database Class: **DisabledForeground** Specifies foreground color to use when drawing a disabled element. If the option is specified as an empty string (which is typically the case on monochrome displays), disabled elements are drawn with the normal foreground color but they are dimmed by drawing them with a stippled fill pattern. Command-Line Name: **-exportselection** Database Name: **exportSelection** Database Class: **ExportSelection** Specifies whether or not a selection in the widget should also be the X selection. The value may have any of the forms accepted by **[Tcl\_GetBoolean](https://www.tcl.tk/man/tcl/TclLib/GetInt.htm)**, such as **true**, **false**, **0**, **1**, **yes**, or **no**. If the selection is exported, then selecting in the widget deselects the current X selection, selecting outside the widget deselects any widget selection, and the widget will respond to selection retrieval requests when it has a selection. The default is usually for widgets to export selections. 
Command-Line Name: **-font** Database Name: **[font](font.htm)** Database Class: **[Font](font.htm)** Specifies the font to use when drawing text inside the widget. The value may have any of the forms described in the **[font](font.htm)** manual page under **[FONT DESCRIPTION](font.htm)**. Command-Line Name: **-foreground or -fg** Database Name: **foreground** Database Class: **Foreground** Specifies the normal foreground color to use when displaying the widget. Command-Line Name: **-highlightbackground** Database Name: **highlightBackground** Database Class: **HighlightBackground** Specifies the color to display in the traversal highlight region when the widget does not have the input focus. Command-Line Name: **-highlightcolor** Database Name: **highlightColor** Database Class: **HighlightColor** Specifies the color to use for the traversal highlight rectangle that is drawn around the widget when it has the input focus. Command-Line Name: **-highlightthickness** Database Name: **highlightThickness** Database Class: **HighlightThickness** Specifies a non-negative value indicating the width of the highlight rectangle to draw around the outside of the widget when it has the input focus. The value may have any of the forms acceptable to **[Tk\_GetPixels](https://www.tcl.tk/man/tcl/TkLib/GetPixels.htm)**. If the value is zero, no focus highlight is drawn around the widget. Command-Line Name: **-image** Database Name: **image** Database Class: **Image** Specifies an image to display in the widget, which must have been created with the **[image create](image.htm)** command. Typically, if the **-image** option is specified then it overrides other options that specify a bitmap or textual value to display in the widget, though this is controlled by the **-compound** option; the **-image** option may be reset to an empty string to re-enable a bitmap or text display. 
Command-Line Name: **-insertbackground** Database Name: **insertBackground** Database Class: **Foreground** Specifies the color to use as background in the area covered by the insertion cursor. This color will normally override either the normal background for the widget (or the selection background if the insertion cursor happens to fall in the selection). Command-Line Name: **-insertborderwidth** Database Name: **insertBorderWidth** Database Class: **BorderWidth** Specifies a non-negative value indicating the width of the 3-D border to draw around the insertion cursor. The value may have any of the forms acceptable to **[Tk\_GetPixels](https://www.tcl.tk/man/tcl/TkLib/GetPixels.htm)**. Command-Line Name: **-insertofftime** Database Name: **insertOffTime** Database Class: **OffTime** Specifies a non-negative integer value indicating the number of milliseconds the insertion cursor should remain “off” in each blink cycle. If this option is zero then the cursor does not blink: it is on all the time. Command-Line Name: **-insertontime** Database Name: **insertOnTime** Database Class: **OnTime** Specifies a non-negative integer value indicating the number of milliseconds the insertion cursor should remain “on” in each blink cycle. Command-Line Name: **-insertwidth** Database Name: **insertWidth** Database Class: **InsertWidth** Specifies a value indicating the total width of the insertion cursor. The value may have any of the forms acceptable to **[Tk\_GetPixels](https://www.tcl.tk/man/tcl/TkLib/GetPixels.htm)**. If a border has been specified for the insertion cursor (using the **-insertborderwidth** option), the border will be drawn inside the width specified by the **-insertwidth** option. Command-Line Name: **-jump** Database Name: **jump** Database Class: **Jump** For widgets with a slider that can be dragged to adjust a value, such as scrollbars, this option determines when notifications are made about changes in the value. 
The option's value must be a boolean of the form accepted by **[Tcl\_GetBoolean](https://www.tcl.tk/man/tcl/TclLib/GetInt.htm)**. If the value is false, updates are made continuously as the slider is dragged. If the value is true, updates are delayed until the mouse button is released to end the drag; at that point a single notification is made (the value “jumps” rather than changing smoothly). Command-Line Name: **-justify** Database Name: **justify** Database Class: **Justify** When there are multiple lines of text displayed in a widget, this option determines how the lines line up with each other. Must be one of **left**, **center**, or **right**. **Left** means that the lines' left edges all line up, **center** means that the lines' centers are aligned, and **right** means that the lines' right edges line up. Command-Line Name: **-orient** Database Name: **orient** Database Class: **Orient** For widgets that can lay themselves out with either a horizontal or vertical orientation, such as scrollbars, this option specifies which orientation should be used. Must be either **horizontal** or **vertical** or an abbreviation of one of these. Command-Line Name: **-padx** Database Name: **padX** Database Class: **Pad** Specifies a non-negative value indicating how much extra space to request for the widget in the X-direction. The value may have any of the forms acceptable to **[Tk\_GetPixels](https://www.tcl.tk/man/tcl/TkLib/GetPixels.htm)**. When computing how large a window it needs, the widget will add this amount to the width it would normally need (as determined by the width of the things displayed in the widget); if the geometry manager can satisfy this request, the widget will end up with extra internal space to the left and/or right of what it displays inside. Most widgets only use this option for padding text: if they are displaying a bitmap or image, then they usually ignore padding options. 
Command-Line Name: **-pady** Database Name: **padY** Database Class: **Pad** Specifies a non-negative value indicating how much extra space to request for the widget in the Y-direction. The value may have any of the forms acceptable to **[Tk\_GetPixels](https://www.tcl.tk/man/tcl/TkLib/GetPixels.htm)**. When computing how large a window it needs, the widget will add this amount to the height it would normally need (as determined by the height of the things displayed in the widget); if the geometry manager can satisfy this request, the widget will end up with extra internal space above and/or below what it displays inside. Most widgets only use this option for padding text: if they are displaying a bitmap or image, then they usually ignore padding options. Command-Line Name: **-relief** Database Name: **relief** Database Class: **Relief** Specifies the 3-D effect desired for the widget. Acceptable values are **raised**, **sunken**, **flat**, **ridge**, **solid**, and **groove**. The value indicates how the interior of the widget should appear relative to its exterior; for example, **raised** means the interior of the widget should appear to protrude from the screen, relative to the exterior of the widget. Command-Line Name: **-repeatdelay** Database Name: **repeatDelay** Database Class: **RepeatDelay** Specifies the number of milliseconds a button or key must be held down before it begins to auto-repeat. Used, for example, on the up- and down-arrows in scrollbars. Command-Line Name: **-repeatinterval** Database Name: **repeatInterval** Database Class: **RepeatInterval** Used in conjunction with **-repeatdelay**: once auto-repeat begins, this option determines the number of milliseconds between auto-repeats. Command-Line Name: **-selectbackground** Database Name: **selectBackground** Database Class: **Foreground** Specifies the background color to use when displaying selected items. 
Command-Line Name: **-selectborderwidth** Database Name: **selectBorderWidth** Database Class: **BorderWidth** Specifies a non-negative value indicating the width of the 3-D border to draw around selected items. The value may have any of the forms acceptable to **[Tk\_GetPixels](https://www.tcl.tk/man/tcl/TkLib/GetPixels.htm)**. Command-Line Name: **-selectforeground** Database Name: **selectForeground** Database Class: **Background** Specifies the foreground color to use when displaying selected items. Command-Line Name: **-setgrid** Database Name: **setGrid** Database Class: **SetGrid** Specifies a boolean value that determines whether this widget controls the resizing grid for its top-level window. This option is typically used in text widgets, where the information in the widget has a natural size (the size of a character) and it makes sense for the window's dimensions to be integral numbers of these units. These natural window sizes form a grid. If the **-setgrid** option is set to true then the widget will communicate with the window manager so that when the user interactively resizes the top-level window that contains the widget, the dimensions of the window will be displayed to the user in grid units and the window size will be constrained to integral numbers of grid units. See the section **GRIDDED GEOMETRY MANAGEMENT** in the **[wm](wm.htm)** manual entry for more details. Command-Line Name: **-takefocus** Database Name: **takeFocus** Database Class: **TakeFocus** Determines whether the window accepts the focus during keyboard traversal (e.g., Tab and Shift-Tab). Before setting the focus to a window, the traversal scripts consult the value of the **-takefocus** option. A value of **0** means that the window should be skipped entirely during keyboard traversal. **1** means that the window should receive the input focus as long as it is viewable (it and all of its ancestors are mapped). 
An empty value for the option means that the traversal scripts make the decision about whether or not to focus on the window: the current algorithm is to skip the window if it is disabled, if it has no key bindings, or if it is not viewable. If the value has any other form, then the traversal scripts take the value, append the name of the window to it (with a separator space), and evaluate the resulting string as a Tcl script. The script must return **0**, **1**, or an empty string: a **0** or **1** value specifies whether the window will receive the input focus, and an empty string results in the default decision described above. Note: this interpretation of the option is defined entirely by the Tcl scripts that implement traversal: the widget implementations ignore the option entirely, so you can change its meaning if you redefine the keyboard traversal scripts. Command-Line Name: **-text** Database Name: **[text](text.htm)** Database Class: **[Text](text.htm)** Specifies a string to be displayed inside the widget. The way in which the string is displayed depends on the particular widget and may be determined by other options, such as **-anchor** or **-justify**. Command-Line Name: **-textvariable** Database Name: **textVariable** Database Class: **[Variable](../tclcmd/variable.htm)** Specifies the name of a global variable. The value of the variable is a text string to be displayed inside the widget; if the variable value changes then the widget will automatically update itself to reflect the new value. The way in which the string is displayed in the widget depends on the particular widget and may be determined by other options, such as **-anchor** or **-justify**. Command-Line Name: **-troughcolor** Database Name: **troughColor** Database Class: **Background** Specifies the color to use for the rectangular trough areas in widgets such as scrollbars and scales. This option is ignored for scrollbars on Windows (native widget does not recognize this option). 
Command-Line Name: **-underline**
Database Name: **underline**
Database Class: **Underline**

Specifies the integer index of a character to underline in the widget. This option is used by the default bindings to implement keyboard traversal for menu buttons and menu entries. 0 corresponds to the first character of the text displayed in the widget, 1 to the next character, and so on.

Command-Line Name: **-wraplength**
Database Name: **wrapLength**
Database Class: **WrapLength**

For widgets that can perform word-wrapping, this option specifies the maximum line length. Lines that would exceed this length are wrapped onto the next line, so that no line is longer than the specified length. The value may be specified in any of the standard forms for screen distances. If this value is less than or equal to 0 then no wrapping is done: lines will break only at newline characters in the text.

Command-Line Name: **-xscrollcommand**
Database Name: **xScrollCommand**
Database Class: **ScrollCommand**

Specifies the prefix for a command used to communicate with horizontal scrollbars. When the view in the widget's window changes (or whenever anything else occurs that could change the display in a scrollbar, such as a change in the total size of the widget's contents), the widget will generate a Tcl command by concatenating the scroll command and two numbers. Each of the numbers is a fraction between 0 and 1, which indicates a position in the document. 0 indicates the beginning of the document, 1 indicates the end, .333 indicates a position one third of the way through the document, and so on. The first fraction indicates the first information in the document that is visible in the window, and the second fraction indicates the information just after the last portion that is visible. The command is then passed to the Tcl interpreter for execution. Typically the **-xscrollcommand** option consists of the path name of a scrollbar widget followed by “set”, e.g. “.x.scrollbar set”: this will cause the scrollbar to be updated whenever the view in the window changes. If this option is not specified, then no command will be executed.

Command-Line Name: **-yscrollcommand**
Database Name: **yScrollCommand**
Database Class: **ScrollCommand**

Specifies the prefix for a command used to communicate with vertical scrollbars. This option is treated in the same way as the **-xscrollcommand** option, except that it is used for vertical scrollbars and is provided by widgets that support vertical scrolling. See the description of **-xscrollcommand** for details on how this option is used.

See also
--------

**[colors](colors.htm)**, **[cursors](cursors.htm)**, **[font](font.htm)**

Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/options.htm>
tcl_tk bindtags bindtags ======== Name ---- bindtags — Determine which bindings apply to a window, and order of evaluation Synopsis -------- **bindtags** *window* ?*tagList*? Description ----------- When a binding is created with the **[bind](bind.htm)** command, it is associated either with a particular window such as **.a.b.c**, a class name such as **[Button](button.htm)**, the keyword **all**, or any other string. All of these forms are called *binding tags*. Each window contains a list of binding tags that determine how events are processed for the window. When an event occurs in a window, it is applied to each of the window's tags in order: for each tag, the most specific binding that matches the given tag and event is executed. See the **[bind](bind.htm)** command for more information on the matching process. By default, each window has four binding tags consisting of the name of the window, the window's class name, the name of the window's nearest toplevel ancestor, and **all**, in that order. Toplevel windows have only three tags by default, since the toplevel name is the same as that of the window. The **bindtags** command allows the binding tags for a window to be read and modified. If **bindtags** is invoked with only one argument, then the current set of binding tags for *window* is returned as a list. If the *tagList* argument is specified to **bindtags**, then it must be a proper list; the tags for *window* are changed to the elements of the list. The elements of *tagList* may be arbitrary strings; however, any tag starting with a dot is treated as the name of a window; if no window by that name exists at the time an event is processed, then the tag is ignored for that event. The order of the elements in *tagList* determines the order in which binding scripts are executed in response to events. For example, the command ``` **bindtags .b {all . 
Button .b}** ``` reverses the order in which binding scripts will be evaluated for a button named **.b** so that **all** bindings are invoked first, followed by bindings for **.b**'s toplevel (“.”), followed by class bindings, followed by bindings for **.b**. If *tagList* is an empty list then the binding tags for *window* are returned to the default state described above. The **bindtags** command may be used to introduce arbitrary additional binding tags for a window, or to remove standard tags. For example, the command ``` **bindtags .b {.b TrickyButton . all}** ``` replaces the **[Button](button.htm)** tag for **.b** with **TrickyButton**. This means that the default widget bindings for buttons, which are associated with the **[Button](button.htm)** tag, will no longer apply to **.b**, but any bindings associated with **TrickyButton** (perhaps some new button behavior) will apply. Example ------- If you have a set of nested **[frame](frame.htm)** widgets and you want events sent to a **[button](button.htm)** widget to also be delivered to all the widgets up to the current **[toplevel](toplevel.htm)** (in contrast to Tk's default behavior, where events are not delivered to those intermediate windows) to make it easier to have accelerators that are only active for part of a window, you could use a helper procedure like this to help set things up: ``` proc setupBindtagsForTreeDelivery {widget} { set tags [list $widget [winfo class $widget]] set w $widget set t [winfo toplevel $w] while {$w ne $t} { set w [winfo parent $w] lappend tags $w } lappend tags all **bindtags** $widget $tags } ``` See also -------- **[bind](bind.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/bindtags.htm> tcl_tk tkerror tkerror ======= Name ---- tkerror — Command invoked to process background errors Synopsis -------- **tkerror** *message* Description ----------- Note: as of Tk 4.1 the **tkerror** command has been renamed to 
**[bgerror](../tclcmd/bgerror.htm)** because the event loop (which is what usually invokes it) is now part of Tcl. For backward compatibility the **[bgerror](../tclcmd/bgerror.htm)** provided by the current Tk version still tries to call **tkerror** if there is one (or an auto-loadable one), so old scripts defining that error handler should still work; nevertheless, you should modify your scripts to use **[bgerror](../tclcmd/bgerror.htm)** instead of **tkerror**, because support for the old name may vanish in the near future. If that call fails, **[bgerror](../tclcmd/bgerror.htm)** posts a dialog showing the error and offering to display the stack trace to the user. If you want your own error management you should directly override **[bgerror](../tclcmd/bgerror.htm)** instead of **tkerror**. Documentation for **[bgerror](../tclcmd/bgerror.htm)** is available as part of Tcl's documentation. Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/tkerror.htm> tcl_tk raise raise ===== Name ---- raise — Change a window's position in the stacking order Synopsis -------- **raise** *window* ?*aboveThis*? Description ----------- If the *aboveThis* argument is omitted then the command raises *window* so that it is above all of its siblings in the stacking order (it will not be obscured by any siblings and will obscure any siblings that overlap it). If *aboveThis* is specified then it must be the path name of a window that is either a sibling of *window* or the descendant of a sibling of *window*. In this case the **raise** command will insert *window* into the stacking order just above *aboveThis* (or the ancestor of *aboveThis* that is a sibling of *window*); this could end up either raising or lowering *window*. All **[toplevel](toplevel.htm)** windows may be restacked with respect to each other, whatever their relative path names, but the window manager is not obligated to strictly honor requests to restack. 
Example ------- Make a button appear to be in a sibling frame that was created after it. This is often necessary when building GUIs in the style where you create your activity widgets first before laying them out on the display: ``` button .b -text "Hi there!" pack [frame .f -background blue] pack [label .f.l1 -text "This is above"] pack .b -in .f pack [label .f.l2 -text "This is below"] **raise** .b ``` See also -------- **[lower](lower.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/raise.htm> tcl_tk bind bind ==== [NAME](bind.htm#M2) bind — Arrange for X events to invoke Tcl scripts [SYNOPSIS](bind.htm#M3) [INTRODUCTION](bind.htm#M4) [EVENT PATTERNS](bind.htm#M5) [MODIFIERS](bind.htm#M6) [EVENT TYPES](bind.htm#M7) [**Activate**, **Deactivate**](bind.htm#M8) [**MouseWheel**](bind.htm#M9) [**KeyPress**, **KeyRelease**](bind.htm#M10) [**ButtonPress**, **ButtonRelease**, **Motion**](bind.htm#M11) [**Configure**](bind.htm#M12) [**Map**, **Unmap**](bind.htm#M13) [**Visibility**](bind.htm#M14) [**Expose**](bind.htm#M15) [**Destroy**](bind.htm#M16) [**FocusIn**, **FocusOut**](bind.htm#M17) [**Enter**, **Leave**](bind.htm#M18) [**Property**](bind.htm#M19) [**Colormap**](bind.htm#M20) [**MapRequest**, **CirculateRequest**, **ResizeRequest**, **ConfigureRequest**, **Create**](bind.htm#M21) [**Gravity**, **Reparent**, **Circulate**](bind.htm#M22) [EVENT DETAILS](bind.htm#M23) [BINDING SCRIPTS AND SUBSTITUTIONS](bind.htm#M24) [**%%**](bind.htm#M25) [**%#**](bind.htm#M26) [**%a**](bind.htm#M27) [**%b**](bind.htm#M28) [**%c**](bind.htm#M29) [**%d**](bind.htm#M30) [**%f**](bind.htm#M31) [**%h**](bind.htm#M32) [**%i**](bind.htm#M33) [**%k**](bind.htm#M34) [**%m**](bind.htm#M35) [**%o**](bind.htm#M36) [**%p**](bind.htm#M37) [**%s**](bind.htm#M38) [**%t**](bind.htm#M39) [**%w**](bind.htm#M40) [**%x**, **%y**](bind.htm#M41) [**%A**](bind.htm#M42) [**%B**](bind.htm#M43) [**%D**](bind.htm#M44) [**%E**](bind.htm#M45) 
[**%K**](bind.htm#M46) [**%M**](bind.htm#M47) [**%N**](bind.htm#M48) [**%P**](bind.htm#M49) [**%R**](bind.htm#M50) [**%S**](bind.htm#M51) [**%T**](bind.htm#M52) [**%W**](bind.htm#M53) [**%X**, **%Y**](bind.htm#M54) [MULTIPLE MATCHES](bind.htm#M55) [MULTI-EVENT SEQUENCES AND IGNORED EVENTS](bind.htm#M56) [ERRORS](bind.htm#M57) [EXAMPLES](bind.htm#M58) [SEE ALSO](bind.htm#M59) [KEYWORDS](bind.htm#M60) Name ---- bind — Arrange for X events to invoke Tcl scripts Synopsis -------- **bind** *tag* ?*sequence*? ?**+**??*script*? Introduction ------------ The **bind** command associates Tcl scripts with X events. If all three arguments are specified, **bind** will arrange for *script* (a Tcl script) to be evaluated whenever the event(s) given by *sequence* occur in the window(s) identified by *tag*. If *script* is prefixed with a “+”, then it is appended to any existing binding for *sequence*; otherwise *script* replaces any existing binding. If *script* is an empty string then the current binding for *sequence* is destroyed, leaving *sequence* unbound. In all of the cases where a *script* argument is provided, **bind** returns an empty string. If *sequence* is specified without a *script*, then the script currently bound to *sequence* is returned, or an empty string is returned if there is no binding for *sequence*. If neither *sequence* nor *script* is specified, then the return value is a list whose elements are all the sequences for which there exist bindings for *tag*. The *tag* argument determines which window(s) the binding applies to. If *tag* begins with a dot, as in **.a.b.c**, then it must be the path name for a window; otherwise it may be an arbitrary string. Each window has an associated list of tags, and a binding applies to a particular window if its tag is among those specified for the window. 
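The invocation forms described above can be sketched as follows (widget and script names are illustrative):

```
# Create or replace a binding: run a script on a button-1 press.
bind .b <Button-1> {puts "clicked at %x,%y"}

# Prefix the script with “+” to append to the existing binding.
bind .b <Button-1> {+puts "and again"}

# With no script, return the script currently bound to the sequence.
puts [bind .b <Button-1>]

# With neither sequence nor script, list all bound sequences for the tag.
puts [bind .b]

# An empty script destroys the binding.
bind .b <Button-1> {}
```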
Although the **[bindtags](bindtags.htm)** command may be used to assign an arbitrary set of binding tags to a window, the default binding tags provide the following behavior: * If a tag is the name of an internal window the binding applies to that window. * If the tag is the name of a toplevel window the binding applies to the toplevel window and all its internal windows. * If the tag is the name of a class of widgets, such as **Button**, the binding applies to all widgets in that class; * If *tag* has the value **all**, the binding applies to all windows in the application. Event patterns -------------- The *sequence* argument specifies a sequence of one or more event patterns, with optional white space between the patterns. Each event pattern may take one of three forms. In the simplest case it is a single printing ASCII character, such as **a** or **[**. The character may not be a space character or the character **<**. This form of pattern matches a **KeyPress** event for the particular character. The second form of pattern is longer but more general. It has the following syntax: ``` **<***modifier-modifier-type-detail***>** ``` The entire event pattern is surrounded by angle brackets. Inside the angle brackets are zero or more modifiers, an event type, and an extra piece of information (*detail*) identifying a particular button or keysym. Any of the fields may be omitted, as long as at least one of *type* and *detail* is present. The fields must be separated by white space or dashes. The third form of pattern is used to specify a user-defined, named virtual event. It has the following syntax: ``` **<<***name***>>** ``` The entire virtual event pattern is surrounded by double angle brackets. Inside the angle brackets is the user-defined name of the virtual event. Modifiers, such as **Shift** or **Control**, may not be combined with a virtual event to modify it. 
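A short sketch of the virtual-event form (the **<<SaveRequest>>** name is invented for illustration; **event add** is described in the **event** manual page):

```
# Bind to a user-defined virtual event...
bind .editor <<SaveRequest>> {puts "saving"}

# ...and define which physical sequences trigger it.
event add <<SaveRequest>> <Control-s>
event add <<SaveRequest>> <Command-s>    ;# Macintosh-style modifier
```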
Bindings on a virtual event may be created before the virtual event is defined, and if the definition of a virtual event changes dynamically, all windows bound to that virtual event will respond immediately to the new definition. Some widgets (e.g. **[menu](menu.htm)** and **[text](text.htm)**) issue virtual events when their internal state is updated in some ways. Please see the manual page for each widget for details. ### Modifiers Modifiers consist of any of the following values: | | | | --- | --- | | **Control** | **Mod1**, **M1**, **Command** | | **Alt** | **Mod2**, **M2**, **Option** | | **Shift** | **Mod3**, **M3** | | **Lock** | **Mod4**, **M4** | | **Extended** | **Mod5**, **M5** | | **Button1**, **B1** | **Meta**, **M** | | **Button2**, **B2** | **Double** | | **Button3**, **B3** | **Triple** | | **Button4**, **B4** | **Quadruple** | | **Button5**, **B5** | Where more than one value is listed, separated by commas, the values are equivalent. Most of the modifiers have the obvious X meanings. For example, **Button1** requires that button 1 be depressed when the event occurs. For a binding to match a given event, the modifiers in the event must include all of those specified in the event pattern. An event may also contain additional modifiers not specified in the binding. For example, if button 1 is pressed while the shift and control keys are down, the pattern **<Control-Button-1>** will match the event, but **<Mod1-Button-1>** will not. If no modifiers are specified, then any combination of modifiers may be present in the event. **Meta** and **M** refer to whichever of the **M1** through **M5** modifiers is associated with the Meta key(s) on the keyboard (keysyms **Meta\_R** and **Meta\_L**). If there are no Meta keys, or if they are not associated with any modifiers, then **Meta** and **M** will not match any events. 
Similarly, the **Alt** modifier refers to whichever modifier is associated with the alt key(s) on the keyboard (keysyms **Alt\_L** and **Alt\_R**). The **Double**, **Triple** and **Quadruple** modifiers are a convenience for specifying double mouse clicks and other repeated events. They cause a particular event pattern to be repeated 2, 3 or 4 times, and also place a time and space requirement on the sequence: for a sequence of events to match a **Double**, **Triple** or **Quadruple** pattern, all of the events must occur close together in time and without substantial mouse motion in between. For example, **<Double-Button-1>** is equivalent to **<Button-1><Button-1>** with the extra time and space requirement. The **Command** and **Option** modifiers are equivalents of **Mod1** resp. **Mod2**, they correspond to Macintosh-specific modifier keys. The **Extended** modifier is, at present, specific to Windows. It appears on events that are associated with the keys on the “extended keyboard”. On a US keyboard, the extended keys include the **Alt** and **Control** keys at the right of the keyboard, the cursor keys in the cluster to the left of the numeric pad, the **NumLock** key, the **[Break](../tclcmd/break.htm)** key, the **PrintScreen** key, and the **/** and **Enter** keys in the numeric keypad. ### Event types The *type* field may be any of the standard X event types, with a few extra abbreviations. The *type* field will also accept a couple non-standard X event types that were added to better support the Macintosh and Windows platforms. Below is a list of all the valid types; where two names appear together, they are synonyms. 
| | | | | --- | --- | --- | | **Activate** | **Destroy** | **Map** | | **ButtonPress**, **Button** | **Enter** | **MapRequest** | | **ButtonRelease** | **Expose** | **Motion** | | **Circulate** | **FocusIn** | **MouseWheel** | | **CirculateRequest** | **FocusOut** | **Property** | | **Colormap** | **Gravity** | **Reparent** | | **Configure** | **KeyPress**, **Key** | **ResizeRequest** | | **ConfigureRequest** | **KeyRelease** | **Unmap** | | **Create** | **Leave** | **Visibility** | | **Deactivate** | Most of the above events have the same fields and behaviors as events in the X Windowing system. You can find more detailed descriptions of these events in any X window programming book. A couple of the events are extensions to the X event system to support features unique to the Macintosh and Windows platforms. We provide a little more detail on these events here. These include: **Activate**, **Deactivate** These two events are sent to every sub-window of a toplevel when they change state. In addition to the focus window, the Macintosh and Windows platforms have a notion of an active window (which often has but is not required to have the focus). On the Macintosh, widgets in the active window have a different appearance than widgets in deactive windows. The **Activate** event is sent to all the sub-windows in a toplevel when it changes from being deactive to active. Likewise, the **Deactivate** event is sent when the window's state changes from active to deactive. There are no useful percent substitutions you would make when binding to these events. **MouseWheel** Many contemporary mice support a mouse wheel, which is used for scrolling documents without using the scrollbars. By rolling the wheel, the system will generate **MouseWheel** events that the application can use to scroll. Like **Key** events, the event is always routed to the window that currently has focus. 
When the event is received you can use the **%D** substitution to get the *delta* field for the event, which is an integer value describing how the mouse wheel has moved. The smallest value that the system will report is defined by the OS. The sign of the value determines which direction your widget should scroll. Positive values should scroll up and negative values should scroll down. **KeyPress**, **KeyRelease** The **KeyPress** and **KeyRelease** events are generated whenever a key is pressed or released. **KeyPress** and **KeyRelease** events are sent to the window which currently has the keyboard focus. **ButtonPress**, **ButtonRelease**, **Motion** The **ButtonPress** and **ButtonRelease** events are generated when the user presses or releases a mouse button. **Motion** events are generated whenever the pointer is moved. **ButtonPress**, **ButtonRelease**, and **Motion** events are normally sent to the window containing the pointer. When a mouse button is pressed, the window containing the pointer automatically obtains a temporary pointer grab. Subsequent **ButtonPress**, **ButtonRelease**, and **Motion** events will be sent to that window, regardless of which window contains the pointer, until all buttons have been released. **Configure** A **Configure** event is sent to a window whenever its size, position, or border width changes, and sometimes when it has changed position in the stacking order. **Map**, **Unmap** The **Map** and **Unmap** events are generated whenever the mapping state of a window changes. Windows are created in the unmapped state. Top-level windows become mapped when they transition to the **normal** state, and are unmapped in the **withdrawn** and **iconic** states. Other windows become mapped when they are placed under control of a geometry manager (for example **[pack](pack.htm)** or **[grid](grid.htm)**). A window is *viewable* only if it and all of its ancestors are mapped. 
Note that geometry managers typically do not map their children until they have been mapped themselves, and unmap all children when they become unmapped; hence in Tk **Map** and **Unmap** events indicate whether or not a window is viewable. **Visibility** A window is said to be *obscured* when another window above it in the stacking order fully or partially overlaps it. **Visibility** events are generated whenever a window's obscurity state changes; the *state* field (**%s**) specifies the new state. **Expose** An **Expose** event is generated whenever all or part of a window should be redrawn (for example, when a window is first mapped or if it becomes unobscured). It is normally not necessary for client applications to handle **Expose** events, since Tk handles them internally. **Destroy** A **Destroy** event is delivered to a window when it is destroyed. When the **Destroy** event is delivered to a widget, it is in a “half-dead” state: the widget still exists, but most operations on it will fail. **FocusIn**, **FocusOut** The **FocusIn** and **FocusOut** events are generated whenever the keyboard focus changes. A **FocusOut** event is sent to the old focus window, and a **FocusIn** event is sent to the new one. In addition, if the old and new focus windows do not share a common parent, “virtual crossing” focus events are sent to the intermediate windows in the hierarchy. Thus a **FocusIn** event indicates that the target window or one of its descendants has acquired the focus, and a **FocusOut** event indicates that the focus has been changed to a window outside the target window's hierarchy. The keyboard focus may be changed explicitly by a call to **[focus](focus.htm)**, or implicitly by the window manager. **Enter**, **Leave** An **Enter** event is sent to a window when the pointer enters that window, and a **Leave** event is sent when the pointer leaves it. 
If there is a pointer grab in effect, **Enter** and **Leave** events are only delivered to the window owning the grab. In addition, when the pointer moves between two windows, **Enter** and **Leave** “virtual crossing” events are sent to intermediate windows in the hierarchy in the same manner as for **FocusIn** and **FocusOut** events. **Property** A **Property** event is sent to a window whenever an X property belonging to that window is changed or deleted. **Property** events are not normally delivered to Tk applications as they are handled by the Tk core. **Colormap** A **Colormap** event is generated whenever the colormap associated with a window has been changed, installed, or uninstalled. Widgets may be assigned a private colormap by specifying a **-colormap** option; the window manager is responsible for installing and uninstalling colormaps as necessary. Note that Tk provides no useful details for this event type. **MapRequest**, **CirculateRequest**, **ResizeRequest**, **ConfigureRequest**, **Create** These events are not normally delivered to Tk applications. They are included for completeness, to make it possible to write X11 window managers in Tk. (These events are only delivered when a client has selected **SubstructureRedirectMask** on a window; the Tk core does not use this mask.) **Gravity**, **Reparent**, **Circulate** The events **Gravity** and **Reparent** are not normally delivered to Tk applications. They are included for completeness. A **Circulate** event indicates that the window has moved to the top or to the bottom of the stacking order as a result of an **XCirculateSubwindows** protocol request. Note that the stacking order may be changed for other reasons which do not generate a **Circulate** event, and that Tk does not use **XCirculateSubwindows()** internally. This event type is included only for completeness; there is no reliable way to track changes to a window's position in the stacking order. 
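As an illustration of handling one of the platform-oriented event types above, a minimal **MouseWheel** binding might look like this (the widget names are illustrative, and the divisor assumes the conventional 120-unit wheel delta used on Windows):

```
# Scroll a text widget with the mouse wheel.
# %D is the signed delta; positive values mean "scroll up",
# so the value is negated for [yview scroll].
text .t -yscrollcommand {.sb set}
scrollbar .sb -command {.t yview}
pack .sb -side right -fill y
pack .t -side left -fill both -expand 1
bind .t <MouseWheel> {%W yview scroll [expr {-%D / 120}] units}
```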
### Event details The last part of a long event specification is *detail*. In the case of a **ButtonPress** or **ButtonRelease** event, it is the number of a button (1-5). If a button number is given, then only an event on that particular button will match; if no button number is given, then an event on any button will match. Note: giving a specific button number is different than specifying a button modifier; in the first case, it refers to a button being pressed or released, while in the second it refers to some other button that is already depressed when the matching event occurs. If a button number is given then *type* may be omitted: it will default to **ButtonPress**. For example, the specifier **<1>** is equivalent to **<ButtonPress-1>**. If the event type is **KeyPress** or **KeyRelease**, then *detail* may be specified in the form of an X keysym. Keysyms are textual specifications for particular keys on the keyboard; they include all the alphanumeric ASCII characters (e.g. “a” is the keysym for the ASCII character “a”), plus descriptions for non-alphanumeric characters (“comma” is the keysym for the comma character), plus descriptions for all the non-ASCII keys on the keyboard (e.g. “Shift\_L” is the keysym for the left shift key, and “F1” is the keysym for the F1 function key, if it exists). The complete list of keysyms is not presented here; it is available in other X documentation and may vary from system to system. If necessary, you can use the **%K** notation described below to print out the keysym name for a particular key. If a keysym *detail* is given, then the *type* field may be omitted; it will default to **KeyPress**. For example, **<Control-comma>** is equivalent to **<Control-KeyPress-comma>**. Binding scripts and substitutions --------------------------------- The *script* argument to **bind** is a Tcl script, which will be executed whenever the given event sequence occurs. 
The script will be executed in the same interpreter that the **bind** command was executed in, and it will run at global level (only global variables will be accessible). If *script* contains any **%** characters, then the script will not be executed directly. Instead, a new script will be generated by replacing each **%**, and the character following it, with information from the current event. The replacement depends on the character following the **%**, as defined in the list below. Unless otherwise indicated, the replacement string is the decimal value of the given field from the current event. Some of the substitutions are only valid for certain types of events; if they are used for other types of events the value substituted is undefined. **%%** Replaced with a single percent. **%#** The number of the last client request processed by the server (the *serial* field from the event). Valid for all event types. **%a** The *above* field from the event, formatted as a hexadecimal number. Valid only for **Configure** events. Indicates the sibling window immediately below the receiving window in the stacking order, or **0** if the receiving window is at the bottom. **%b** The number of the button that was pressed or released. Valid only for **ButtonPress** and **ButtonRelease** events. **%c** The *count* field from the event. Valid only for **Expose** events. Indicates that there are *count* pending **Expose** events which have not yet been delivered to the window. **%d** The *detail* or *user\_data* field from the event. The **%d** is replaced by a string identifying the detail. 
For **Enter**, **Leave**, **FocusIn**, and **FocusOut** events, the string will be one of the following: | | | | --- | --- | | **NotifyAncestor** | **NotifyNonlinearVirtual** | | **NotifyDetailNone** | **NotifyPointer** | | **NotifyInferior** | **NotifyPointerRoot** | | **NotifyNonlinear** | **NotifyVirtual** | For **ConfigureRequest** events, the string will be one of: | | | | --- | --- | | **Above** | **Opposite** | | **Below** | **None** | | **BottomIf** | **TopIf** | For virtual events, the string will be whatever value is stored in the *user\_data* field when the event was created (typically with **event generate**), or the empty string if the field is NULL. Virtual events corresponding to key sequence presses (see **event add** for details) set the *user\_data* to NULL. For events other than these, the substituted string is undefined. **%f** The *focus* field from the event (**0** or **1**). Valid only for **Enter** and **Leave** events. **1** if the receiving window is the focus window or a descendant of the focus window, **0** otherwise. **%h** The *height* field from the event. Valid for the **Configure**, **ConfigureRequest**, **Create**, **ResizeRequest**, and **Expose** events. Indicates the new or requested height of the window. **%i** The *window* field from the event, represented as a hexadecimal integer. Valid for all event types. **%k** The *keycode* field from the event. Valid only for **KeyPress** and **KeyRelease** events. **%m** The *mode* field from the event. The substituted string is one of **NotifyNormal**, **NotifyGrab**, **NotifyUngrab**, or **NotifyWhileGrabbed**. Valid only for **Enter**, **FocusIn**, **FocusOut**, and **Leave** events. **%o** The *override\_redirect* field from the event. Valid only for **Map**, **Reparent**, and **Configure** events. **%p** The *place* field from the event, substituted as one of the strings **PlaceOnTop** or **PlaceOnBottom**. Valid only for **Circulate** and **CirculateRequest** events. 
**%s** The *state* field from the event. For **ButtonPress**, **ButtonRelease**, **Enter**, **KeyPress**, **KeyRelease**, **Leave**, and **Motion** events, a decimal string is substituted. For **Visibility**, one of the strings **VisibilityUnobscured**, **VisibilityPartiallyObscured**, and **VisibilityFullyObscured** is substituted. For **Property** events, substituted with either the string **NewValue** (indicating that the property has been created or modified) or **Delete** (indicating that the property has been removed). **%t** The *time* field from the event. This is the X server timestamp (typically the time since the last server reset) in milliseconds, when the event occurred. Valid for most events. **%w** The *width* field from the event. Indicates the new or requested width of the window. Valid only for **Configure**, **ConfigureRequest**, **Create**, **ResizeRequest**, and **Expose** events. **%x**, **%y** The *x* and *y* fields from the event. For **ButtonPress**, **ButtonRelease**, **Motion**, **KeyPress**, **KeyRelease**, and **MouseWheel** events, **%x** and **%y** indicate the position of the mouse pointer relative to the receiving window. For **Enter** and **Leave** events, the position where the mouse pointer crossed the window, relative to the receiving window. For **Configure** and **Create** requests, the *x* and *y* coordinates of the window relative to its parent window. **%A** Substitutes the UNICODE character corresponding to the event, or the empty string if the event does not correspond to a UNICODE character (e.g. the shift key was pressed). **XmbLookupString** (or **XLookupString** when input method support is turned off) does all the work of translating from the event to a UNICODE character. Valid only for **KeyPress** and **KeyRelease** events. **%B** The *border\_width* field from the event. Valid only for **Configure**, **ConfigureRequest**, and **Create** events. **%D** This reports the *delta* value of a **MouseWheel** event. 
The *delta* value represents the rotation units the mouse wheel has been moved. The sign of the value represents the direction the mouse wheel was scrolled. **%E** The *send\_event* field from the event. Valid for all event types. **0** indicates that this is a “normal” event, **1** indicates that it is a “synthetic” event generated by **SendEvent**. **%K** The keysym corresponding to the event, substituted as a textual string. Valid only for **KeyPress** and **KeyRelease** events. **%M** The number of script-based binding patterns matched so far for the event. Valid for all event types. **%N** The keysym corresponding to the event, substituted as a decimal number. Valid only for **KeyPress** and **KeyRelease** events. **%P** The name of the property being updated or deleted (which may be converted to an XAtom using **[winfo atom](winfo.htm)**.) Valid only for **Property** events. **%R** The *root* window identifier from the event. Valid only for events containing a *root* field. **%S** The *subwindow* window identifier from the event, formatted as a hexadecimal number. Valid only for events containing a *subwindow* field. **%T** The *type* field from the event. Valid for all event types. **%W** The path name of the window to which the event was reported (the *window* field from the event). Valid for all event types. **%X**, **%Y** The *x\_root* and *y\_root* fields from the event. If a virtual-root window manager is being used then the substituted values are the corresponding x-coordinate and y-coordinate in the virtual root. Valid only for **ButtonPress**, **ButtonRelease**, **KeyPress**, **KeyRelease**, and **Motion** events. Same meaning as **%x** and **%y**, except relative to the (virtual) root window. The replacement string for a %-replacement is formatted as a proper Tcl list element. This means that spaces or special characters such as **$** and **{** may be preceded by backslashes. 
This guarantees that the string will be passed through the Tcl parser when the binding script is evaluated. Most replacements are numbers or well-defined strings such as **Above**; for these replacements no special formatting is ever necessary. The most common case where reformatting occurs is for the **%A** substitution. For example, if *script* is

```
insert %A
```

and the character typed is an open square bracket, then the script actually executed will be

```
insert \[
```

This will cause the **insert** to receive the original replacement string (open square bracket) as its first argument. If the extra backslash had not been added, Tcl would not have been able to parse the script correctly.

Multiple matches
----------------

It is possible for several bindings to match a given X event. If the bindings are associated with different *tags*, then each of the bindings will be executed, in order. By default, a binding for the widget will be executed first, followed by a class binding, a binding for its toplevel, and an **all** binding. The **[bindtags](bindtags.htm)** command may be used to change this order for a particular window or to associate additional binding tags with the window.

The **[continue](../tclcmd/continue.htm)** and **[break](../tclcmd/break.htm)** commands may be used inside a binding script to control the processing of matching scripts. If **[continue](../tclcmd/continue.htm)** is invoked, then the current binding script is terminated but Tk will continue processing binding scripts associated with other *tags*. If the **[break](../tclcmd/break.htm)** command is invoked within a binding script, then that script terminates and no other scripts will be invoked for the event.

If more than one binding matches a particular event and they have the same *tag*, then the most specific binding is chosen and its script is evaluated. The following tests are applied, in order, to determine which of several matching sequences is more specific:

1. an event pattern that specifies a specific button or key is more specific than one that does not;
2. a longer sequence (in terms of number of events matched) is more specific than a shorter sequence;
3. if the modifiers specified in one pattern are a subset of the modifiers in another pattern, then the pattern with more modifiers is more specific;
4. a virtual event whose physical pattern matches the sequence is less specific than the same physical pattern that is not associated with a virtual event;
5. given a sequence that matches two or more virtual events, one of the virtual events will be chosen, but the order is undefined.

If the matching sequences contain more than one event, then tests (3)-(5) are applied in order from the most recent event to the least recent event in the sequences. If these tests fail to determine a winner, then the most recently registered sequence is the winner.

If there are two (or more) virtual events that are both triggered by the same sequence, and both of those virtual events are bound to the same window tag, then only one of the virtual events will be triggered, and it will be picked at random:

```
event add <<Paste>> <Control-y>
event add <<Paste>> <Button-2>
event add <<Scroll>> <Button-2>
bind Entry <<Paste>> {puts Paste}
bind Entry <<Scroll>> {puts Scroll}
```

If the user types Control-y, the **<<Paste>>** binding will be invoked, but if the user presses button 2 then one of either the **<<Paste>>** or the **<<Scroll>>** bindings will be invoked, but exactly which one gets invoked is undefined. If an X event does not match any of the existing bindings, then the event is ignored. An unbound event is not considered to be an error.
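The effect of **break** described above can be sketched as follows (a minimal illustration; the widget name and messages are invented):

```
# A class binding that normally fires for every Entry widget.
entry .e
pack .e
bind Entry <Delete> {puts "Entry class binding ran"}

# A widget binding on .e that invokes break: the widget script runs
# first (per the default bindtags order), then processing stops, so
# the Entry class binding above never fires for this widget.
bind .e <Delete> {
    puts "widget binding ran"
    break
}
```

Replacing **break** with **continue** would stop only the widget script while still letting the class, toplevel, and **all** bindings run.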
Multi-event sequences and ignored events
----------------------------------------

When a *sequence* specified in a **bind** command contains more than one event pattern, then its script is executed whenever the recent events (leading up to and including the current event) match the given sequence. This means, for example, that if button 1 is clicked repeatedly the sequence **<Double-ButtonPress-1>** will match each button press but the first. If extraneous events that would prevent a match occur in the middle of an event sequence then the extraneous events are ignored unless they are **KeyPress** or **ButtonPress** events. For example, **<Double-ButtonPress-1>** will match a sequence of presses of button 1, even though there will be **ButtonRelease** events (and possibly **Motion** events) between the **ButtonPress** events. Furthermore, a **KeyPress** event may be preceded by any number of other **KeyPress** events for modifier keys without the modifier keys preventing a match. For example, the event sequence **aB** will match a press of the **a** key, a release of the **a** key, a press of the **Shift** key, and a press of the **b** key: the press of **Shift** is ignored because it is a modifier key. Finally, if several **Motion** events occur in a row, only the last one is used for purposes of matching binding sequences.

Errors
------

If an error occurs in executing the script for a binding then the **[bgerror](../tclcmd/bgerror.htm)** mechanism is used to report the error. The **[bgerror](../tclcmd/bgerror.htm)** command will be executed at global level (outside the context of any Tcl procedure).

Examples
--------

Arrange for a string describing the motion of the mouse to be printed out when the mouse is double-clicked:

```
bind . <Double-1> {
    puts "hi from (%x,%y)"
}
```

A little GUI that displays what the keysym name of the last key pressed is:

```
set keysym "Press any key"
pack [label .l -textvariable keysym -padx 2m -pady 1m]
bind . <Key> {
    set keysym "You pressed %K"
}
```

See also
--------

**[bgerror](../tclcmd/bgerror.htm)**, **[bindtags](bindtags.htm)**, **[event](event.htm)**, **[focus](focus.htm)**, **[grab](grab.htm)**, **[keysyms](keysyms.htm)**

Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/bind.htm>
tcl_tk chooseDirectory chooseDirectory =============== [NAME](choosedirectory.htm#M2) tk\_chooseDirectory — pops up a dialog box for the user to select a directory. [SYNOPSIS](choosedirectory.htm#M3) [DESCRIPTION](choosedirectory.htm#M4) [**-initialdir** *dirname*](choosedirectory.htm#M5) [**-mustexist** *boolean*](choosedirectory.htm#M6) [**-parent** *window*](choosedirectory.htm#M7) [**-title** *titleString*](choosedirectory.htm#M8) [EXAMPLE](choosedirectory.htm#M9) [SEE ALSO](choosedirectory.htm#M10) [KEYWORDS](choosedirectory.htm#M11) Name ---- tk\_chooseDirectory — pops up a dialog box for the user to select a directory. Synopsis -------- **tk\_chooseDirectory** ?*option value ...*? Description ----------- The procedure **tk\_chooseDirectory** pops up a dialog box for the user to select a directory. The following *option-value* pairs are possible as command line arguments: **-initialdir** *dirname* Specifies that the directories in *dirname* should be displayed when the dialog pops up. If this parameter is not specified, the initial directory defaults to the current working directory on non-Windows systems and on Windows systems prior to Vista. On Vista and later systems, the initial directory defaults to the last user-selected directory for the application. If the parameter specifies a relative path, the return value will convert the relative path to an absolute path. **-mustexist** *boolean* Specifies whether the user may specify non-existent directories. If this parameter is true, then the user may only select directories that already exist. The default value is *false*. **-parent** *window* Makes *window* the logical parent of the dialog. The dialog is displayed on top of its parent window. On Mac OS X, this turns the file dialog into a sheet attached to the parent window. **-title** *titleString* Specifies a string to display as the title of the dialog box. If this option is not specified, then a default title will be displayed.
Example
-------

```
set dir [tk_chooseDirectory \
        -initialdir ~ -title "Choose a directory"]
if {$dir eq ""} {
    label .l -text "No directory selected"
} else {
    label .l -text "Selected $dir"
}
```

See also
--------

**tk\_getOpenFile**, **tk\_getSaveFile**

Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/chooseDirectory.htm>

tcl_tk option option ====== [NAME](option.htm#M2) option — Add/retrieve window options to/from the option database [SYNOPSIS](option.htm#M3) [DESCRIPTION](option.htm#M4) [**widgetDefault**](option.htm#M5) [**startupFile**](option.htm#M6) [**userDefault**](option.htm#M7) [**interactive**](option.htm#M8) [PATTERN FORMAT](option.htm#M9) [EXAMPLES](option.htm#M10) [SEE ALSO](option.htm#M11) [KEYWORDS](option.htm#M12) Name ---- option — Add/retrieve window options to/from the option database Synopsis -------- **option add** *pattern value* ?*priority*? **option clear** **option get** *window name class* **option readfile** *fileName* ?*priority*? Description ----------- The **option** command allows you to add entries to the Tk option database or to retrieve options from the database. The **add** form of the command adds a new option to the database. *Pattern* contains the option being specified, and consists of names and/or classes separated by asterisks or dots, in the usual X format (see **[PATTERN FORMAT](#M9)**). *Value* contains a text string to associate with *pattern*; this is the value that will be returned in calls to **[Tk\_GetOption](https://www.tcl.tk/man/tcl/TkLib/GetOption.htm)** or by invocations of the **option get** command. If *priority* is specified, it indicates the priority level for this option (see below for legal values); it defaults to **interactive**. This command always returns an empty string. The **option clear** command clears the option database.
Default options (from the **RESOURCE\_MANAGER** property or the **.Xdefaults** file) will be reloaded automatically the next time an option is added to the database or removed from it. This command always returns an empty string. The **option get** command returns the value of the option specified for *window* under *name* and *class*. If several entries in the option database match *window*, *name*, and *class*, then the command returns whichever was created with the highest *priority* level. If there are several matching entries at the same priority level, then it returns whichever entry was most recently entered into the option database. If there are no matching entries, then the empty string is returned. The **readfile** form of the command reads *fileName*, which should have the standard format for an X resource database such as **.Xdefaults**, and adds all the options specified in that file to the option database. If *priority* is specified, it indicates the priority level at which to enter the options; *priority* defaults to **interactive**. The file is read through a channel which is in "utf-8" encoding; invalid byte sequences are automatically converted to valid ones. This means that encodings like ISO 8859-1 or cp1252 will most likely work as well, but this cannot be guaranteed. This cannot be changed; setting the [encoding system] has no effect. The *priority* arguments to the **option** command are normally specified symbolically using one of the following values: **widgetDefault** Level 20. Used for default values hard-coded into widgets. **startupFile** Level 40. Used for options specified in application-specific startup files. **userDefault** Level 60. Used for options specified in user-specific defaults files, such as **.Xdefaults**, resource databases loaded into the X server, or user-specific startup files. **interactive** Level 80. Used for options specified interactively after the application starts running.
If *priority* is not specified, it defaults to this level. Any of the above keywords may be abbreviated. In addition, priorities may be specified numerically using integers between 0 and 100, inclusive. The numeric form is probably a bad idea except for new priority levels other than the ones given above. Pattern format -------------- Patterns consist of a sequence of words separated by either periods, “.”, or asterisks “\*”. The overall pattern may also be optionally preceded by an asterisk. Each word in the pattern conventionally starts with either an upper-case letter (in which case it denotes the class of either a widget or an option) or any other character, when it denotes the name of a widget or option. The last word in the pattern always indicates the option; the preceding ones constrain which widgets that option will be looked for in. When two words are separated by a period, the latter widget must be a direct child of the former (or the option must apply to only the indicated widgets). When two words are separated by an asterisk, any depth of widgets may lie between the former and latter widgets (and the option applies to all widgets that are children of the former widget). If the overall pattern is preceded by an asterisk, then the overall pattern applies anywhere it can throughout the whole widget hierarchy. Otherwise the first word of the pattern is matched against the name and class of the “**.**” **[toplevel](toplevel.htm)**, which are usually set by options to **[wish](../usercmd/wish.htm)**. 
Examples
--------

Instruct every button in the application to have red text on it unless explicitly overridden, by setting the **foreground** for the **[Button](button.htm)** class (note that on some platforms the option is ignored):

```
option add *Button.foreground red startupFile
```

Allow users to control what happens in an entry widget when the Return key is pressed by specifying a script in the option database, and add a default option for that which rings the bell:

```
entry .e
bind .e <Return> [option get .e returnCommand Command]
option add *.e.returnCommand bell widgetDefault
```

See also
--------

**[options](options.htm)**, **[wish](../usercmd/wish.htm)**

Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/option.htm>

tcl_tk bitmap bitmap ====== [NAME](bitmap.htm#M2) bitmap — Images that display two colors [SYNOPSIS](bitmap.htm#M3) [DESCRIPTION](bitmap.htm#M4) [CREATING BITMAPS](bitmap.htm#M5) [**-background** *color*](bitmap.htm#M6) [**-data** *string*](bitmap.htm#M7) [**-file** *name*](bitmap.htm#M8) [**-foreground** *color*](bitmap.htm#M9) [**-maskdata** *string*](bitmap.htm#M10) [**-maskfile** *name*](bitmap.htm#M11) [IMAGE COMMAND](bitmap.htm#M12) [*imageName* **cget** *option*](bitmap.htm#M13) [*imageName* **configure** ?*option*? ?*value option value ...*?](bitmap.htm#M14) [KEYWORDS](bitmap.htm#M15) Name ---- bitmap — Images that display two colors Synopsis -------- **image create bitmap** ?*name*? ?*options*? *imageName* **cget** *option* *imageName* **configure** ?*option*? ?*value option value ...*? Description ----------- A bitmap is an image whose pixels can display either of two colors or be transparent. A bitmap image is defined by four things: a background color, a foreground color, and two bitmaps, called the *source* and the *mask*. Each of the bitmaps specifies 0/1 values for a rectangular array of pixels, and the two bitmaps must have the same dimensions.
For pixels where the mask is zero, the image displays nothing, producing a transparent effect. For other pixels, the image displays the foreground color if the source data is one and the background color if the source data is zero.

Creating bitmaps
----------------

Like all images, bitmaps are created using the **[image create](image.htm)** command. Bitmaps support the following *options*: **-background** *color* Specifies a background color for the image in any of the standard ways accepted by Tk. If this option is set to an empty string then the background pixels will be transparent. This effect is achieved by using the source bitmap as the mask bitmap, ignoring any **-maskdata** or **-maskfile** options. **-data** *string* Specifies the contents of the source bitmap as a string. The string must adhere to X11 bitmap format (e.g., as generated by the **bitmap** program). If both the **-data** and **-file** options are specified, the **-data** option takes precedence. **-file** *name* *name* gives the name of a file whose contents define the source bitmap. The file must adhere to X11 bitmap format (e.g., as generated by the **bitmap** program). **-foreground** *color* Specifies a foreground color for the image in any of the standard ways accepted by Tk. **-maskdata** *string* Specifies the contents of the mask as a string. The string must adhere to X11 bitmap format (e.g., as generated by the **bitmap** program). If both the **-maskdata** and **-maskfile** options are specified, the **-maskdata** option takes precedence. **-maskfile** *name* *name* gives the name of a file whose contents define the mask. The file must adhere to X11 bitmap format (e.g., as generated by the **bitmap** program).

Image command
-------------

When a bitmap image is created, Tk also creates a new command whose name is the same as the image. This command may be used to invoke various operations on the image. It has the following general form:

```
imageName option ?arg arg ...?
``` *Option* and the *arg*s determine the exact behavior of the command. The following commands are possible for bitmap images: *imageName* **cget** *option* Returns the current value of the configuration option given by *option*. *Option* may have any of the values accepted by the **[image create](image.htm)** **bitmap** command. *imageName* **configure** ?*option*? ?*value option value ...*? Query or modify the configuration options for the image. If no *option* is specified, returns a list describing all of the available options for *imageName* (see **[Tk\_ConfigureInfo](https://www.tcl.tk/man/tcl/TkLib/ConfigWidg.htm)** for information on the format of this list). If *option* is specified with no *value*, then the command returns a list describing the one named option (this list will be identical to the corresponding sublist of the value returned if no *option* is specified). If one or more *option-value* pairs are specified, then the command modifies the given option(s) to have the given value(s); in this case the command returns an empty string. *Option* may have any of the values accepted by the **[image create](image.htm)** **bitmap** command. 
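As a minimal sketch tying the pieces above together (the XBM data, the image name **box**, and the widget name are invented for illustration):

```
# Create a two-color image from inline X11 bitmap (XBM) data:
# an 8x8 hollow square.
image create bitmap box -data {
    #define box_width 8
    #define box_height 8
    static unsigned char box_bits[] = {
        0xff, 0x81, 0x81, 0x81, 0x81, 0x81, 0x81, 0xff};
} -foreground red

# Display it, then query and modify it through the image command.
pack [label .l -image box]
puts [box cget -foreground]
box configure -foreground blue
```

Setting **-background** to an empty string instead would make the zero-valued pixels transparent, as described under **CREATING BITMAPS** above.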
Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/bitmap.htm> tcl_tk ttk_sizegrip ttk\_sizegrip ============= [NAME](ttk_sizegrip.htm#M2) ttk::sizegrip — Bottom-right corner resize widget [SYNOPSIS](ttk_sizegrip.htm#M3) [DESCRIPTION](ttk_sizegrip.htm#M4) [STANDARD OPTIONS](ttk_sizegrip.htm#M5) [-class, undefined, undefined](ttk_widget.htm#M-class) [-cursor, cursor, Cursor](ttk_widget.htm#M-cursor) [-state, state, State](ttk_widget.htm#M-state) [-style, style, Style](ttk_widget.htm#M-style) [-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus) [WIDGET COMMAND](ttk_sizegrip.htm#M6) [PLATFORM-SPECIFIC NOTES](ttk_sizegrip.htm#M7) [EXAMPLES](ttk_sizegrip.htm#M8) [BUGS](ttk_sizegrip.htm#M9) [SEE ALSO](ttk_sizegrip.htm#M10) [KEYWORDS](ttk_sizegrip.htm#M11) Name ---- ttk::sizegrip — Bottom-right corner resize widget Synopsis -------- **ttk::sizegrip** *pathName* ?*options*? Description ----------- A **ttk::sizegrip** widget (also known as a *grow box*) allows the user to resize the containing toplevel window by pressing and dragging the grip. Standard options ---------------- **[-class, undefined, undefined](ttk_widget.htm#M-class)** **[-cursor, cursor, Cursor](ttk_widget.htm#M-cursor)** **[-state, state, State](ttk_widget.htm#M-state)** **[-style, style, Style](ttk_widget.htm#M-style)** **[-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus)** Widget command -------------- Sizegrip widgets support the standard **cget**, **configure**, **identify**, **instate**, and **state** methods. No other widget methods are used. Platform-specific notes ----------------------- On Mac OSX, toplevel windows automatically include a built-in size grip by default. Adding a **ttk::sizegrip** there is harmless, since the built-in grip will just mask the widget. 
Examples
--------

Using pack:

```
pack [ttk::frame $top.statusbar] -side bottom -fill x
pack [ttk::sizegrip $top.statusbar.grip] -side right -anchor se
```

Using grid:

```
grid [ttk::sizegrip $top.statusbar.grip] \
    -row $lastRow -column $lastColumn -sticky se
# ... optional: add vertical scrollbar in $lastColumn,
# ... optional: add horizontal scrollbar in $lastRow
```

Bugs
----

If the containing toplevel's position was specified relative to the right or bottom of the screen (e.g., “**wm geometry ...** *w***x***h***-***x***-***y*” instead of “**wm geometry ...** *w***x***h***+***x***+***y*”), the sizegrip widget will not resize the window. **ttk::sizegrip** widgets only support “southeast” resizing.

See also
--------

**[ttk::widget](ttk_widget.htm)**

Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/ttk_sizegrip.htm>

tcl_tk ttk_checkbutton ttk\_checkbutton ================ [NAME](ttk_checkbutton.htm#M2) ttk::checkbutton — On/off widget [SYNOPSIS](ttk_checkbutton.htm#M3) [DESCRIPTION](ttk_checkbutton.htm#M4) [STANDARD OPTIONS](ttk_checkbutton.htm#M5) [-class, undefined, undefined](ttk_widget.htm#M-class) [-compound, compound, Compound](ttk_widget.htm#M-compound) [-cursor, cursor, Cursor](ttk_widget.htm#M-cursor) [-image, image, Image](ttk_widget.htm#M-image) [-state, state, State](ttk_widget.htm#M-state) [-style, style, Style](ttk_widget.htm#M-style) [-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus) [-text, text, Text](ttk_widget.htm#M-text) [-textvariable, textVariable, Variable](ttk_widget.htm#M-textvariable) [-underline, underline, Underline](ttk_widget.htm#M-underline) [-width, width, Width](ttk_widget.htm#M-width) [WIDGET-SPECIFIC OPTIONS](ttk_checkbutton.htm#M6) [-command, command, Command](ttk_checkbutton.htm#M7) [-offvalue, offValue, OffValue](ttk_checkbutton.htm#M8) [-onvalue, onValue, OnValue](ttk_checkbutton.htm#M9) [-variable, variable, Variable](ttk_checkbutton.htm#M10)
[WIDGET COMMAND](ttk_checkbutton.htm#M11) [*pathname* **invoke**](ttk_checkbutton.htm#M12) [WIDGET STATES](ttk_checkbutton.htm#M13) [STANDARD STYLES](ttk_checkbutton.htm#M14) [SEE ALSO](ttk_checkbutton.htm#M15) [KEYWORDS](ttk_checkbutton.htm#M16) Name ---- ttk::checkbutton — On/off widget Synopsis -------- **ttk::checkbutton** *pathName* ?*options*? Description ----------- A **ttk::checkbutton** widget is used to show or change a setting. It has two states, selected and deselected. The state of the checkbutton may be linked to a Tcl variable. Standard options ---------------- **[-class, undefined, undefined](ttk_widget.htm#M-class)** **[-compound, compound, Compound](ttk_widget.htm#M-compound)** **[-cursor, cursor, Cursor](ttk_widget.htm#M-cursor)** **[-image, image, Image](ttk_widget.htm#M-image)** **[-state, state, State](ttk_widget.htm#M-state)** **[-style, style, Style](ttk_widget.htm#M-style)** **[-takefocus, takeFocus, TakeFocus](ttk_widget.htm#M-takefocus)** **[-text, text, Text](ttk_widget.htm#M-text)** **[-textvariable, textVariable, Variable](ttk_widget.htm#M-textvariable)** **[-underline, underline, Underline](ttk_widget.htm#M-underline)** **[-width, width, Width](ttk_widget.htm#M-width)** Widget-specific options ----------------------- Command-Line Name: **-command** Database Name: **command** Database Class: **Command** A Tcl script to execute whenever the widget is invoked. Command-Line Name: **-offvalue** Database Name: **offValue** Database Class: **OffValue** The value to store in the associated **-variable** when the widget is deselected. Defaults to **0**. Command-Line Name: **-onvalue** Database Name: **onValue** Database Class: **OnValue** The value to store in the associated **-variable** when the widget is selected. Defaults to **1**. Command-Line Name: **-variable** Database Name: **variable** Database Class: **Variable** The name of a global variable whose value is linked to the widget. Defaults to the widget pathname if not specified. 
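A minimal sketch of how **-variable**, **-onvalue**, **-offvalue**, and **-command** interact (the variable and widget names are invented for illustration):

```
# The widget starts selected because the linked variable already
# holds the -onvalue.
set ::mode on
ttk::checkbutton .c -text "Enable" \
    -variable ::mode -onvalue on -offvalue off \
    -command {puts "mode is now $::mode"}
pack .c

# Toggles to the deselected state: sets ::mode to "off" and runs
# the -command, just as a user click would.
.c invoke
```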
Widget command -------------- In addition to the standard **cget**, **configure**, **identify**, **instate**, and **state** commands, checkbuttons support the following additional widget commands: *pathname* **invoke** Toggles between the selected and deselected states and evaluates the associated **-command**. If the widget is currently selected, sets the **-variable** to the **-offvalue** and deselects the widget; otherwise, sets the **-variable** to the **-onvalue**. Returns the result of the **-command**. Widget states ------------- The widget does not respond to user input if the **disabled** state is set. The widget sets the **selected** state whenever the linked **-variable** is set to the widget's **-onvalue**, and clears it otherwise. The widget sets the **alternate** state whenever the linked **-variable** is unset. (The **alternate** state may be used to indicate a “tri-state” or “indeterminate” selection.) Standard styles --------------- **Ttk::checkbutton** widgets support the **Toolbutton** style in all standard themes, which is useful for creating widgets for toolbars. See also -------- **[ttk::widget](ttk_widget.htm)**, **[ttk::radiobutton](ttk_radiobutton.htm)**, **[checkbutton](checkbutton.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/ttk_checkbutton.htm>
tcl_tk ttk_vsapi ttk\_vsapi ========== [NAME](ttk_vsapi.htm#M2) ttk\_vsapi — Define a Microsoft Visual Styles element [SYNOPSIS](ttk_vsapi.htm#M3) [DESCRIPTION](ttk_vsapi.htm#M4) [OPTIONS](ttk_vsapi.htm#M5) [**-padding** *padding*](ttk_vsapi.htm#M6) [**-margins** *padding*](ttk_vsapi.htm#M7) [**-width** *width*](ttk_vsapi.htm#M8) [**-height** *height*](ttk_vsapi.htm#M9) [STATE MAP](ttk_vsapi.htm#M10) [EXAMPLE](ttk_vsapi.htm#M11) [SEE ALSO](ttk_vsapi.htm#M12) [KEYWORDS](ttk_vsapi.htm#M13) Name ---- ttk\_vsapi — Define a Microsoft Visual Styles element Synopsis -------- **ttk::style element create** *name* **vsapi** *className* *partId* ?*stateMap*? ?*options*? Description ----------- The **vsapi** element factory creates a new element in the current theme whose visual appearance is drawn using the Microsoft Visual Styles API which is responsible for the themed styles on Windows XP and Vista. This factory permits any of the Visual Styles parts to be declared as Ttk elements that can then be included in a style layout to modify the appearance of Ttk widgets. *className* and *partId* are required parameters and specify the Visual Styles class and part as given in the Microsoft documentation. The *stateMap* may be provided to map Ttk states to Visual Styles API states (see **[STATE MAP](#M10)**). Options ------- Valid *options* are: **-padding** *padding* Specifies the element's interior padding. *padding* is a list of up to four integers specifying the left, top, right and bottom padding quantities respectively. This option may not be mixed with any other options. **-margins** *padding* Specifies the element's exterior padding. *padding* is a list of up to four integers specifying the left, top, right and bottom padding quantities respectively. This option may not be mixed with any other options. **-width** *width* Specifies the width for the element. If this option is set then the Visual Styles API will not be queried for the recommended size of the part.
If this option is set then **-height** should also be set. The **-width** and **-height** options cannot be mixed with the **-padding** or **-margins** options. **-height** *height* Specifies the height of the element. See the comments for **-width**.

State map
---------

The *stateMap* parameter is a list of ttk states and the corresponding Visual Styles API state value. This permits the element appearance to respond to changes in the widget state such as becoming active or being pressed. The list should be as described for the **[ttk::style map](ttk_style.htm)** command, but note that the last pair in the list should be the default state and is typically an empty list and 1. Unfortunately all the Visual Styles parts have different state values and these must be looked up either in the Microsoft documentation or, more likely, in the header files. The original header to use was *tmschema.h*, but in more recent versions of the Windows Development Kit this is *vssym32.h*. If no *stateMap* parameter is given there is an implicit default map of {{} 1}.

Example
-------

Create a correctly themed close button by changing the layout of a **[ttk::button](ttk_button.htm)**(n). This uses the WINDOW part WP\_SMALLCLOSEBUTTON and, as documented, the states CBS\_DISABLED, CBS\_HOT, CBS\_NORMAL and CBS\_PUSHED are mapped from ttk states.

```
ttk::style element create smallclose vsapi WINDOW 19 \
    {disabled 4 pressed 3 active 2 {} 1}
ttk::style layout CloseButton {CloseButton.smallclose -sticky news}
pack [ttk::button .close -style CloseButton]
```

Change the appearance of a **[ttk::checkbutton](ttk_checkbutton.htm)**(n) to use the Explorer pin part EBP\_HEADERPIN.
```
ttk::style element create pin vsapi EXPLORERBAR 3 {
    {pressed !selected} 3
    {active !selected} 2
    {pressed selected} 6
    {active selected} 5
    {selected} 4
    {} 1
}
ttk::style layout Explorer.Pin {Explorer.Pin.pin -sticky news}
pack [ttk::checkbutton .pin -style Explorer.Pin]
```

See also
--------

**[ttk::intro](ttk_intro.htm)**, **[ttk::widget](ttk_widget.htm)**, **[ttk::style](ttk_style.htm)**, **[ttk\_image](ttk_image.htm)**

Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/ttk_vsapi.htm>

tcl_tk winfo winfo ===== [NAME](winfo.htm#M2) winfo — Return window-related information [SYNOPSIS](winfo.htm#M3) [DESCRIPTION](winfo.htm#M4) [**winfo atom** ?**-displayof** *window*? *name*](winfo.htm#M5) [**winfo atomname** ?**-displayof** *window*? *id*](winfo.htm#M6) [**winfo cells** *window*](winfo.htm#M7) [**winfo children** *window*](winfo.htm#M8) [**winfo class** *window*](winfo.htm#M9) [**winfo colormapfull** *window*](winfo.htm#M10) [**winfo containing** ?**-displayof** *window*? *rootX rootY*](winfo.htm#M11) [**winfo depth** *window*](winfo.htm#M12) [**winfo exists** *window*](winfo.htm#M13) [**winfo fpixels** *window* *number*](winfo.htm#M14) [**winfo geometry** *window*](winfo.htm#M15) [**winfo height** *window*](winfo.htm#M16) [**winfo id** *window*](winfo.htm#M17) [**winfo interps** ?**-displayof** *window*?](winfo.htm#M18) [**winfo ismapped** *window*](winfo.htm#M19) [**winfo manager** *window*](winfo.htm#M20) [**winfo name** *window*](winfo.htm#M21) [**winfo parent** *window*](winfo.htm#M22) [**winfo pathname** ?**-displayof** *window*?
*id*](winfo.htm#M23) [**winfo pixels** *window* *number*](winfo.htm#M24) [**winfo pointerx** *window*](winfo.htm#M25) [**winfo pointerxy** *window*](winfo.htm#M26) [**winfo pointery** *window*](winfo.htm#M27) [**winfo reqheight** *window*](winfo.htm#M28) [**winfo reqwidth** *window*](winfo.htm#M29) [**winfo rgb** *window color*](winfo.htm#M30) [**winfo rootx** *window*](winfo.htm#M31) [**winfo rooty** *window*](winfo.htm#M32) [**winfo screen** *window*](winfo.htm#M33) [**winfo screencells** *window*](winfo.htm#M34) [**winfo screendepth** *window*](winfo.htm#M35) [**winfo screenheight** *window*](winfo.htm#M36) [**winfo screenmmheight** *window*](winfo.htm#M37) [**winfo screenmmwidth** *window*](winfo.htm#M38) [**winfo screenvisual** *window*](winfo.htm#M39) [**winfo screenwidth** *window*](winfo.htm#M40) [**winfo server** *window*](winfo.htm#M41) [**winfo toplevel** *window*](winfo.htm#M42) [**winfo viewable** *window*](winfo.htm#M43) [**winfo visual** *window*](winfo.htm#M44) [**winfo visualid** *window*](winfo.htm#M45) [**winfo visualsavailable** *window* ?**includeids**?](winfo.htm#M46) [**winfo vrootheight** *window*](winfo.htm#M47) [**winfo vrootwidth** *window*](winfo.htm#M48) [**winfo vrootx** *window*](winfo.htm#M49) [**winfo vrooty** *window*](winfo.htm#M50) [**winfo width** *window*](winfo.htm#M51) [**winfo x** *window*](winfo.htm#M52) [**winfo y** *window*](winfo.htm#M53) [EXAMPLE](winfo.htm#M54) [KEYWORDS](winfo.htm#M55) Name ---- winfo — Return window-related information Synopsis -------- **winfo** *option* ?*arg arg ...*? Description ----------- The **winfo** command is used to retrieve information about windows managed by Tk. It can take any of a number of different forms, depending on the *option* argument. The legal forms are: **winfo atom** ?**-displayof** *window*? *name* Returns a decimal string giving the integer identifier for the atom whose name is *name*. If no atom exists with the name *name* then a new one is created. 
If the **-displayof** option is given then the atom is looked up on the display of *window*; otherwise it is looked up on the display of the application's main window. **winfo atomname** ?**-displayof** *window*? *id* Returns the textual name for the atom whose integer identifier is *id*. If the **-displayof** option is given then the identifier is looked up on the display of *window*; otherwise it is looked up on the display of the application's main window. This command is the inverse of the **winfo atom** command. It generates an error if no such atom exists. **winfo cells** *window* Returns a decimal string giving the number of cells in the color map for *window*. **winfo children** *window* Returns a list containing the path names of all the children of *window*. Top-level windows are returned as children of their logical parents. The list is in stacking order, with the lowest window first, except for top-level windows, which are not returned in stacking order. Use the **[wm stackorder](wm.htm)** command to query the stacking order of top-level windows. **winfo class** *window* Returns the class name for *window*. **winfo colormapfull** *window* Returns 1 if the colormap for *window* is known to be full, 0 otherwise. The colormap for a window is “known” to be full if the last attempt to allocate a new color on that window failed and this application has not freed any colors in the colormap since the failed allocation. **winfo containing** ?**-displayof** *window*? *rootX rootY* Returns the path name for the window containing the point given by *rootX* and *rootY*. *RootX* and *rootY* are specified in screen units (i.e. any form acceptable to **[Tk\_GetPixels](https://www.tcl.tk/man/tcl/TkLib/GetPixels.htm)**) in the coordinate system of the root window (if a virtual-root window manager is in use then the coordinate system of the virtual root window is used). 
If the **-displayof** option is given then the coordinates refer to the screen containing *window*; otherwise they refer to the screen of the application's main window. If no window in this application contains the point then an empty string is returned. In selecting the containing window, children are given higher priority than parents and among siblings the highest one in the stacking order is chosen. **winfo depth** *window* Returns a decimal string giving the depth of *window* (number of bits per pixel). **winfo exists** *window* Returns 1 if there exists a window named *window*, 0 if no such window exists. **winfo fpixels** *window* *number* Returns a floating-point value giving the number of pixels in *window* corresponding to the distance given by *number*. *Number* may be specified in any of the forms acceptable to **[Tk\_GetScreenMM](https://www.tcl.tk/man/tcl/TkLib/GetPixels.htm)**, such as “2.0c” or “1i”. The return value may be fractional; for an integer value, use **winfo pixels**. **winfo geometry** *window* Returns the geometry for *window*, in the form *width***x***height***+***x***+***y*. All dimensions are in pixels. **winfo height** *window* Returns a decimal string giving *window*'s height in pixels. When a window is first created its height will be 1 pixel; the height will eventually be changed by a geometry manager to fulfil the window's needs. If you need the true height immediately after creating a widget, invoke **[update](../tclcmd/update.htm)** to force the geometry manager to arrange it, or use **winfo reqheight** to get the window's requested height instead of its actual height. **winfo id** *window* Returns a hexadecimal string giving a low-level platform-specific identifier for *window*. On Unix platforms, this is the X window identifier. Under Windows, this is the Windows HWND. On the Macintosh the value has no meaning outside Tk. **winfo interps** ?**-displayof** *window*? 
Returns a list whose members are the names of all Tcl interpreters (e.g. all Tk-based applications) currently registered for a particular display. If the **-displayof** option is given then the return value refers to the display of *window*; otherwise it refers to the display of the application's main window. **winfo ismapped** *window* Returns **1** if *window* is currently mapped, **0** otherwise. **winfo manager** *window* Returns the name of the geometry manager currently responsible for *window*, or an empty string if *window* is not managed by any geometry manager. The name is usually the name of the Tcl command for the geometry manager, such as **[pack](pack.htm)** or **[place](place.htm)**. If the geometry manager is a widget, such as canvases or text, the name is the widget's class command, such as **[canvas](canvas.htm)**. **winfo name** *window* Returns *window*'s name (i.e. its name within its parent, as opposed to its full path name). The command **winfo name .** will return the name of the application. **winfo parent** *window* Returns the path name of *window*'s parent, or an empty string if *window* is the main window of the application. **winfo pathname** ?**-displayof** *window*? *id* Returns the path name of the window whose X identifier is *id*. *Id* must be a decimal, hexadecimal, or octal integer and must correspond to a window in the invoking application. If the **-displayof** option is given then the identifier is looked up on the display of *window*; otherwise it is looked up on the display of the application's main window. **winfo pixels** *window* *number* Returns the number of pixels in *window* corresponding to the distance given by *number*. *Number* may be specified in any of the forms acceptable to **[Tk\_GetPixels](https://www.tcl.tk/man/tcl/TkLib/GetPixels.htm)**, such as “2.0c” or “1i”. The result is rounded to the nearest integer value; for a fractional result, use **winfo fpixels**. 
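The difference between **winfo pixels** and **winfo fpixels** can be seen by converting the same screen distance with both. A minimal sketch (requires a running Tk interpreter; the widget name `.l` is illustrative):

```tcl
package require Tk
label .l
# "1c" is one centimetre in screen units; fpixels keeps the
# fractional part, while pixels rounds to the nearest integer.
set fractional [winfo fpixels .l 1c]
set rounded    [winfo pixels  .l 1c]
puts "1c = $fractional px (exact), $rounded px (rounded)"
```

The exact values depend on the screen's reported resolution.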
**winfo pointerx** *window* If the mouse pointer is on the same screen as *window*, returns the pointer's x coordinate, measured in pixels in the screen's root window. If a virtual root window is in use on the screen, the position is measured in the virtual root. If the mouse pointer is not on the same screen as *window* then -1 is returned. **winfo pointerxy** *window* If the mouse pointer is on the same screen as *window*, returns a list with two elements, which are the pointer's x and y coordinates measured in pixels in the screen's root window. If a virtual root window is in use on the screen, the position is computed in the virtual root. If the mouse pointer is not on the same screen as *window* then both of the returned coordinates are -1. **winfo pointery** *window* If the mouse pointer is on the same screen as *window*, returns the pointer's y coordinate, measured in pixels in the screen's root window. If a virtual root window is in use on the screen, the position is computed in the virtual root. If the mouse pointer is not on the same screen as *window* then -1 is returned. **winfo reqheight** *window* Returns a decimal string giving *window*'s requested height, in pixels. This is the value used by *window*'s geometry manager to compute its geometry. **winfo reqwidth** *window* Returns a decimal string giving *window*'s requested width, in pixels. This is the value used by *window*'s geometry manager to compute its geometry. **winfo rgb** *window color* Returns a list containing three decimal values in the range 0 to 65535, which are the red, green, and blue intensities that correspond to *color* in the window given by *window*. *Color* may be specified in any of the forms acceptable for a color option. **winfo rootx** *window* Returns a decimal string giving the x-coordinate, in the root window of the screen, of the upper-left corner of *window*'s border (or *window* if it has no border). 
**winfo rooty** *window* Returns a decimal string giving the y-coordinate, in the root window of the screen, of the upper-left corner of *window*'s border (or *window* if it has no border). **winfo screen** *window* Returns the name of the screen associated with *window*, in the form *displayName*.*screenIndex*. **winfo screencells** *window* Returns a decimal string giving the number of cells in the default color map for *window*'s screen. **winfo screendepth** *window* Returns a decimal string giving the depth of the root window of *window*'s screen (number of bits per pixel). **winfo screenheight** *window* Returns a decimal string giving the height of *window*'s screen, in pixels. **winfo screenmmheight** *window* Returns a decimal string giving the height of *window*'s screen, in millimeters. **winfo screenmmwidth** *window* Returns a decimal string giving the width of *window*'s screen, in millimeters. **winfo screenvisual** *window* Returns one of the following strings to indicate the default visual class for *window*'s screen: **directcolor**, **grayscale**, **pseudocolor**, **staticcolor**, **staticgray**, or **truecolor**. **winfo screenwidth** *window* Returns a decimal string giving the width of *window*'s screen, in pixels. **winfo server** *window* Returns a string containing information about the server for *window*'s display. The exact format of this string may vary from platform to platform. For X servers the string has the form “**X***major***R***minor vendor vendorRelease*” where *major* and *minor* are the version and revision numbers provided by the server (e.g., **X11R5**), *vendor* is the name of the vendor for the server, and *vendorRelease* is an integer release number provided by the server. **winfo toplevel** *window* Returns the path name of the top-of-hierarchy window containing *window*. In standard Tk this will always be a **[toplevel](toplevel.htm)** widget, but extensions may create other kinds of top-of-hierarchy widgets. 
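A common use of the screen and requested-size queries above is centring a window on the screen. A sketch under the assumption that a Tk interpreter is running (widget names are illustrative):

```tcl
package require Tk
toplevel .dlg
label .dlg.msg -text "Centred dialog"
pack .dlg.msg -padx 40 -pady 20
# Let the geometry manager compute the requested size first.
update idletasks
set x [expr {([winfo screenwidth  .dlg] - [winfo reqwidth  .dlg]) / 2}]
set y [expr {([winfo screenheight .dlg] - [winfo reqheight .dlg]) / 2}]
wm geometry .dlg +$x+$y
```

Note the use of **winfo reqwidth**/**winfo reqheight** rather than **winfo width**/**winfo height**, since the actual size is only 1 pixel until a geometry manager has arranged the window.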
**winfo viewable** *window* Returns 1 if *window* and all of its ancestors up through the nearest toplevel window are mapped. Returns 0 if any of these windows are not mapped. **winfo visual** *window* Returns one of the following strings to indicate the visual class for *window*: **directcolor**, **grayscale**, **pseudocolor**, **staticcolor**, **staticgray**, or **truecolor**. **winfo visualid** *window* Returns the X identifier for the visual for *window*. **winfo visualsavailable** *window* ?**includeids**? Returns a list whose elements describe the visuals available for *window*'s screen. Each element consists of a visual class followed by an integer depth. The class has the same form as returned by **winfo visual**. The depth gives the number of bits per pixel in the visual. In addition, if the **includeids** argument is provided, then the depth is followed by the X identifier for the visual. **winfo vrootheight** *window* Returns the height of the virtual root window associated with *window* if there is one; otherwise returns the height of *window*'s screen. **winfo vrootwidth** *window* Returns the width of the virtual root window associated with *window* if there is one; otherwise returns the width of *window*'s screen. **winfo vrootx** *window* Returns the x-offset of the virtual root window associated with *window*, relative to the root window of its screen. This is normally either zero or negative. Returns 0 if there is no virtual root window for *window*. **winfo vrooty** *window* Returns the y-offset of the virtual root window associated with *window*, relative to the root window of its screen. This is normally either zero or negative. Returns 0 if there is no virtual root window for *window*. **winfo width** *window* Returns a decimal string giving *window*'s width in pixels. When a window is first created its width will be 1 pixel; the width will eventually be changed by a geometry manager to fulfil the window's needs. 
If you need the true width immediately after creating a widget, invoke **[update](../tclcmd/update.htm)** to force the geometry manager to arrange it, or use **winfo reqwidth** to get the window's requested width instead of its actual width. **winfo x** *window* Returns a decimal string giving the x-coordinate, in *window*'s parent, of the upper-left corner of *window*'s border (or *window* if it has no border). **winfo y** *window* Returns a decimal string giving the y-coordinate, in *window*'s parent, of the upper-left corner of *window*'s border (or *window* if it has no border). Example ------- Print where the mouse pointer is and what window it is currently over: ``` lassign [**winfo pointerxy** .] x y puts -nonewline "Mouse pointer at ($x,$y) which is " set win [**winfo containing** $x $y] if {$win eq ""} { puts "over no window" } else { puts "over $win" } ``` Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/winfo.htm>
tcl_tk destroy destroy ======= Name ---- destroy — Destroy one or more windows Synopsis -------- **destroy** ?*window window ...*? Description ----------- This command deletes the windows given by the *window* arguments, plus all of their descendants. If a *window* “.” is deleted then all windows will be destroyed and the application will (normally) exit. The *window*s are destroyed in order, and if an error occurs in destroying a window the command aborts without destroying the remaining windows. No error is returned if *window* does not exist. Example ------- Destroy all checkbuttons that are direct children of the given widget: ``` proc killCheckbuttonChildren {parent} { foreach w [winfo children $parent] { if {[winfo class $w] eq "Checkbutton"} { **destroy** $w } } } ``` Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/destroy.htm> tcl_tk menu menu ==== [NAME](menu.htm#M2) menu, tk\_menuSetFocus — Create and manipulate 'menu' widgets and menubars [SYNOPSIS](menu.htm#M3) [STANDARD OPTIONS](menu.htm#M4) [-activebackground, activeBackground, Foreground](options.htm#M-activebackground) [-activeborderwidth, activeBorderWidth, BorderWidth](options.htm#M-activeborderwidth) [-activeforeground, activeForeground, Background](options.htm#M-activeforeground) [-background or -bg, background, Background](options.htm#M-background) [-borderwidth or -bd, borderWidth, BorderWidth](options.htm#M-borderwidth) [-cursor, cursor, Cursor](options.htm#M-cursor) [-disabledforeground, disabledForeground, DisabledForeground](options.htm#M-disabledforeground) [-font, font, Font](options.htm#M-font) [-foreground or -fg, foreground, Foreground](options.htm#M-foreground) [-relief, relief, Relief](options.htm#M-relief) [-takefocus, takeFocus, TakeFocus](options.htm#M-takefocus) [WIDGET-SPECIFIC OPTIONS](menu.htm#M5) [-postcommand, postCommand, Command](menu.htm#M6) [-selectcolor, selectColor, Background](menu.htm#M7) [-tearoff, tearOff, 
TearOff](menu.htm#M8) [-tearoffcommand, tearOffCommand, TearOffCommand](menu.htm#M9) [-title, title, Title](menu.htm#M10) [-type, type, Type](menu.htm#M11) [INTRODUCTION](menu.htm#M12) [TYPES OF ENTRIES](menu.htm#M13) [COMMAND ENTRIES](menu.htm#M14) [SEPARATOR ENTRIES](menu.htm#M15) [CHECKBUTTON ENTRIES](menu.htm#M16) [RADIOBUTTON ENTRIES](menu.htm#M17) [CASCADE ENTRIES](menu.htm#M18) [TEAR-OFF ENTRIES](menu.htm#M19) [MENUBARS](menu.htm#M20) [SPECIAL MENUS IN MENUBARS](menu.htm#M21) [CLONES](menu.htm#M22) [WIDGET COMMAND](menu.htm#M23) [**active**](menu.htm#M24) [**end**](menu.htm#M25) [**last**](menu.htm#M26) [**none**](menu.htm#M27) [**@***number*](menu.htm#M28) [*number*](menu.htm#M29) [*pattern*](menu.htm#M30) [*pathName* **activate** *index*](menu.htm#M31) [*pathName* **add** *type* ?*option value option value ...*?](menu.htm#M32) [*pathName* **cget** *option*](menu.htm#M33) [*pathName* **clone** *newPathname* ?*cloneType*?](menu.htm#M34) [*pathName* **configure** ?*option*? ?*value option value ...*?](menu.htm#M35) [*pathName* **delete** *index1* ?*index2*?](menu.htm#M36) [*pathName* **entrycget** *index option*](menu.htm#M37) [*pathName* **entryconfigure** *index* ?*options...*?](menu.htm#M38) [*pathName* **index** *index*](menu.htm#M39) [*pathName* **insert** *index type* ?*option value option value ...*?](menu.htm#M40) [*pathName* **invoke** *index*](menu.htm#M41) [*pathName* **post** *x y*](menu.htm#M42) [*pathName* **postcascade** *index*](menu.htm#M43) [*pathName* **type** *index*](menu.htm#M44) [*pathName* **unpost**](menu.htm#M45) [*pathName* **xposition** *index*](menu.htm#M46) [*pathName* **yposition** *index*](menu.htm#M47) [MENU ENTRY OPTIONS](menu.htm#M48) [**-activebackground** *value*](menu.htm#M49) [**-activeforeground** *value*](menu.htm#M50) [**-accelerator** *value*](menu.htm#M51) [**-background** *value*](menu.htm#M52) [**-bitmap** *value*](menu.htm#M53) [**-columnbreak** *value*](menu.htm#M54) [**-command** *value*](menu.htm#M55) 
[**-compound** *value*](menu.htm#M56) [**-font** *value*](menu.htm#M57) [**-foreground** *value*](menu.htm#M58) [**-hidemargin** *value*](menu.htm#M59) [**-image** *value*](menu.htm#M60) [**-indicatoron** *value*](menu.htm#M61) [**-label** *value*](menu.htm#M62) [**-menu** *value*](menu.htm#M63) [**-offvalue** *value*](menu.htm#M64) [**-onvalue** *value*](menu.htm#M65) [**-selectcolor** *value*](menu.htm#M66) [**-selectimage** *value*](menu.htm#M67) [**-state** *value*](menu.htm#M68) [**-underline** *value*](menu.htm#M69) [**-value** *value*](menu.htm#M70) [**-variable** *value*](menu.htm#M71) [MENU CONFIGURATIONS](menu.htm#M72) [**Pulldown Menus in Menubar**](menu.htm#M73) [**Pulldown Menus in Menu Buttons**](menu.htm#M74) [**Popup Menus**](menu.htm#M75) [**Option Menus**](menu.htm#M76) [**Torn-off Menus**](menu.htm#M77) [DEFAULT BINDINGS](menu.htm#M78) [BUGS](menu.htm#M79) [SEE ALSO](menu.htm#M80) [KEYWORDS](menu.htm#M81) Name ---- menu, tk\_menuSetFocus — Create and manipulate 'menu' widgets and menubars Synopsis -------- **menu** *pathName* ?*options*? 
**tk\_menuSetFocus** *pathName* Standard options ---------------- **[-activebackground, activeBackground, Foreground](options.htm#M-activebackground)** **[-activeborderwidth, activeBorderWidth, BorderWidth](options.htm#M-activeborderwidth)** **[-activeforeground, activeForeground, Background](options.htm#M-activeforeground)** **[-background or -bg, background, Background](options.htm#M-background)** **[-borderwidth or -bd, borderWidth, BorderWidth](options.htm#M-borderwidth)** **[-cursor, cursor, Cursor](options.htm#M-cursor)** **[-disabledforeground, disabledForeground, DisabledForeground](options.htm#M-disabledforeground)** **[-font, font, Font](options.htm#M-font)** **[-foreground or -fg, foreground, Foreground](options.htm#M-foreground)** **[-relief, relief, Relief](options.htm#M-relief)** **[-takefocus, takeFocus, TakeFocus](options.htm#M-takefocus)** Widget-specific options ----------------------- Command-Line Name: **-postcommand** Database Name: **postCommand** Database Class: **Command** If this option is specified then it provides a Tcl command to execute each time the menu is posted. The command is invoked by the **post** widget command before posting the menu. Note that in Tk 8.0 on Macintosh and Windows, all post-commands in a system of menus are executed before any of those menus are posted. This is due to the limitations in the individual platforms' menu managers. Command-Line Name: **-selectcolor** Database Name: **selectColor** Database Class: **Background** For menu entries that are check buttons or radio buttons, this option specifies the color to display in the indicator when the check button or radio button is selected. Command-Line Name: **-tearoff** Database Name: **tearOff** Database Class: **TearOff** This option must have a proper boolean value, which specifies whether or not the menu should include a tear-off entry at the top. If so, it will exist as entry 0 of the menu and the other entries will number starting at 1. 
The default menu bindings arrange for the menu to be torn off when the tear-off entry is invoked. This option is ignored under Aqua/Mac OS X, where menus cannot be torn off. Command-Line Name: **-tearoffcommand** Database Name: **tearOffCommand** Database Class: **TearOffCommand** If this option has a non-empty value, then it specifies a Tcl command to invoke whenever the menu is torn off. The actual command will consist of the value of this option, followed by a space, followed by the name of the menu window, followed by a space, followed by the name of the torn-off menu window. For example, if the option's value is “**a b**” and menu **.x.y** is torn off to create a new menu **.x.tearoff1**, then the command “**a b .x.y .x.tearoff1**” will be invoked. This option is ignored under Aqua/Mac OS X, where menus cannot be torn off. Command-Line Name: **-title** Database Name: **title** Database Class: **Title** The string will be used to title the window created when this menu is torn off. If the title is NULL, then the window will have the title of the menubutton or the text of the cascade item from which this menu was invoked. Command-Line Name: **-type** Database Name: **type** Database Class: **Type** This option can be one of **menubar**, **tearoff**, or **normal**, and is set when the menu is created. While the string returned by the configuration database will change if this option is changed, this does not affect the menu widget's behavior. This is used by the cloning mechanism and is not normally set outside of the Tk library. Introduction ------------ The **menu** command creates a new top-level window (given by the *pathName* argument) and makes it into a menu widget. That menu widget can either be used as a pop-up window or applied to a **[toplevel](toplevel.htm)** (with its **-menu** option) to make it into the menubar for that toplevel. 
Additional options, described above, may be specified on the command line or in the option database to configure aspects of the menu such as its colors and font. The **menu** command returns its *pathName* argument. At the time this command is invoked, there must not exist a window named *pathName*, but *pathName*'s parent must exist. A menu is a widget that displays a collection of one-line entries arranged in one or more columns. There exist several different types of entries, each with different properties. Entries of different types may be combined in a single menu. Menu entries are not the same as entry widgets. In fact, menu entries are not even distinct widgets; the entire menu is one widget. Menu entries are displayed with up to three separate fields. The main field is a label in the form of a text string, a bitmap, or an image, controlled by the **-label**, **-bitmap**, and **-image** options for the entry. If the **-accelerator** option is specified for an entry then a second textual field is displayed to the right of the label. The accelerator typically describes a keystroke sequence that may be typed in the application to cause the same result as invoking the menu entry. The third field is an *indicator*. The indicator is present only for checkbutton or radiobutton entries. It indicates whether the entry is selected or not, and is displayed to the left of the entry's string. In normal use, an entry becomes active (displays itself differently) whenever the mouse pointer is over the entry. If a mouse button is released over the entry then the entry is *invoked*. The effect of invocation is different for each type of entry; these effects are described below in the sections on individual entries. Entries may be *disabled*, which causes their labels and accelerators to be displayed with dimmer colors. The default menu bindings will not allow a disabled entry to be activated or invoked. 
Disabled entries may be re-enabled, at which point it becomes possible to activate and invoke them again. Whenever a menu's active entry is changed, a <<MenuSelect>> virtual event is sent to the menu. The active item can then be queried from the menu, and an action can be taken, such as setting context-sensitive help text for the entry. Types of entries ---------------- ### Command entries The most common kind of menu entry is a command entry, which behaves much like a button widget. When a command entry is invoked, a Tcl command is executed. The Tcl command is specified with the **-command** option. ### Separator entries A separator is an entry that is displayed as a horizontal dividing line. A separator may not be activated or invoked, and it has no behavior other than its display appearance. ### Checkbutton entries A checkbutton menu entry behaves much like a checkbutton widget. When it is invoked it toggles back and forth between the selected and deselected states. When the entry is selected, a particular value is stored in a particular global variable (as determined by the **-onvalue** and **-variable** options for the entry); when the entry is deselected another value (determined by the **-offvalue** option) is stored in the global variable. An indicator box is displayed to the left of the label in a checkbutton entry. If the entry is selected then the indicator's center is displayed in the color given by the **-selectcolor** option for the entry; otherwise the indicator's center is displayed in the background color for the menu. If a **-command** option is specified for a checkbutton entry, then its value is evaluated as a Tcl command each time the entry is invoked; this happens after toggling the entry's selected state. ### Radiobutton entries A radiobutton menu entry behaves much like a radiobutton widget. Radiobutton entries are organized in groups of which only one entry may be selected at a time. 
Whenever a particular entry becomes selected it stores a particular value into a particular global variable (as determined by the **-value** and **-variable** options for the entry). This action causes any previously-selected entry in the same group to deselect itself. Once an entry has become selected, any change to the entry's associated variable will cause the entry to deselect itself. Grouping of radiobutton entries is determined by their associated variables: if two entries have the same associated variable then they are in the same group. An indicator diamond is displayed to the left of the label in each radiobutton entry. If the entry is selected then the indicator's center is displayed in the color given by the **-selectcolor** option for the entry; otherwise the indicator's center is displayed in the background color for the menu. If a **-command** option is specified for a radiobutton entry, then its value is evaluated as a Tcl command each time the entry is invoked; this happens after selecting the entry. ### Cascade entries A cascade entry is one with an associated menu (determined by the **-menu** option). Cascade entries allow the construction of cascading menus. The **postcascade** widget command can be used to post and unpost the associated menu just next to the cascade entry. The associated menu must be a child of the menu containing the cascade entry (this is needed in order for menu traversal to work correctly). A cascade entry posts its associated menu by invoking a Tcl command of the form ``` *menu* **post** *x y* ``` where *menu* is the path name of the associated menu, and *x* and *y* are the root-window coordinates of the upper-right corner of the cascade entry. On Unix, the lower-level menu is unposted by executing a Tcl command with the form ``` *menu* **unpost** ``` where *menu* is the name of the associated menu. On other platforms, the platform's native code takes care of unposting the menu. 
If a **-command** option is specified for a cascade entry then it is evaluated as a Tcl command whenever the entry is invoked. This is not supported on Windows. ### Tear-off entries A tear-off entry appears at the top of the menu if enabled with the **-tearoff** option. It is not like other menu entries in that it cannot be created with the **add** widget command and cannot be deleted with the **delete** widget command. When a tear-off entry is created it appears as a dashed line at the top of the menu. Under the default bindings, invoking the tear-off entry causes a torn-off copy to be made of the menu and all of its submenus. Menubars -------- Any menu can be set as a menubar for a toplevel window (see **[toplevel](toplevel.htm)** command for syntax). On the Macintosh, whenever the toplevel is in front, this menu's cascade items will appear in the menubar across the top of the main monitor. On Windows and Unix, this menu's items will be displayed in a menubar across the top of the window. These menus will behave according to the interface guidelines of their platforms. For every menu set as a menubar, a clone menu is made. See the **[CLONES](#M22)** section for more information. As noted, menubars may behave differently on different platforms. One example of this concerns the handling of checkbuttons and radiobuttons within the menu. While it is permitted to put these menu elements on menubars, they may not be drawn with indicators on some platforms, due to system restrictions. ### Special menus in menubars Certain menus in a menubar will be treated specially. On the Macintosh, access to the special Application, Window and Help menus is provided. On Windows, access to the Windows System menu in each window is provided. On X Windows, a special right-justified help menu may be provided if Motif menu compatibility is enabled. In all cases, these menus must be created with the command name of the menubar menu concatenated with the special name. 
So for a menubar named .menubar, on the Macintosh, the special menus would be .menubar.apple, .menubar.window and .menubar.help; on Windows, the special menu would be .menubar.system; on X Windows, the help menu would be .menubar.help. When Tk sees a .menubar.apple menu as the first menu in a menubar on the Macintosh, that menu's contents make up the first items of the Application menu whenever the window containing the menubar is in front. After all of the Tk-defined items, the menu will have a separator, followed by all standard Application menu items. Such a .apple menu must be present in a menu when that menu is first configured as a toplevel's menubar, otherwise a default application menu (hidden from Tk) will be inserted into the menubar at that time and subsequent addition of a .apple menu will no longer result in it becoming the Application menu. When Tk sees a .menubar.window menu on the Macintosh, the menu's contents are inserted into the standard Window menu of the user's menubar whenever the window's menubar is in front. The first items in the menu are provided by Mac OS X, and the names of the current toplevels are automatically appended after all the Tk-defined items and a separator. When Tk sees a .menubar.help menu on the Macintosh, the menu's contents are appended to the standard Help menu of the user's menubar whenever the window's menubar is in front. The first items in the menu are provided by Mac OS X. When Tk sees a System menu on Windows, its items are appended to the system menu that the menubar is attached to. This menu is tied to the application icon and can be invoked with the mouse or by typing Alt+Spacebar. Due to limitations in the Windows API, any font changes, colors, images, bitmaps, or tearoff images will not appear in the system menu. When Tk sees a Help menu on X Windows and Motif menu compatibility is enabled the menu is moved to be last in the menubar and is right justified. 
Motif menu compatibility is enabled by setting the Tk option **\*Menu.useMotifHelp** to true or by calling **tk::classic::restore menu**. Clones ------ When a menu is set as a menubar for a toplevel window, or when a menu is torn off, a clone of the menu is made. This clone is a menu widget in its own right, but it is a child of the original. Changes in the configuration of the original are reflected in the clone. Additionally, any cascades that are pointed to are also cloned so that menu traversal will work right. Clones are destroyed when either the tearoff or menubar goes away, or when the original menu is destroyed. Widget command -------------- The **menu** command creates a new Tcl command whose name is *pathName*. This command may be used to invoke various operations on the widget. It has the following general form: ``` *pathName option* ?*arg arg ...*? ``` *Option* and the *arg*s determine the exact behavior of the command. Many of the widget commands for a menu take as one argument an indicator of which entry of the menu to operate on. These indicators are called *index*es and may be specified in any of the following forms: **active** Indicates the entry that is currently active. If no entry is active then this form is equivalent to **none**. This form may not be abbreviated. **end** Indicates the bottommost entry in the menu. If there are no entries in the menu then this form is equivalent to **none**. This form may not be abbreviated. **last** Same as **end**. **none** Indicates “no entry at all”; this is used most commonly with the **activate** option to deactivate all the entries in the menu. In most cases the specification of **none** causes nothing to happen in the widget command. This form may not be abbreviated. **@***number* In this form, *number* is treated as a y-coordinate in the menu's window; the entry closest to that y-coordinate is used. For example, “**@0**” indicates the top-most entry in the window. 
*number* Specifies the entry numerically, where 0 corresponds to the top-most entry of the menu, 1 to the entry below it, and so on. *pattern* If the index does not satisfy one of the above forms then this form is used. *Pattern* is pattern-matched against the label of each entry in the menu, in order from the top down, until a matching entry is found. The rules of **[string match](../tclcmd/string.htm)** are used. If the index could match more than one of the above forms, then the form earlier in the above list takes precedence. The following widget commands are possible for menu widgets: *pathName* **activate** *index* Change the state of the entry indicated by *index* to **active** and redisplay it using its active colors. Any previously-active entry is deactivated. If *index* is specified as **none**, or if the specified entry is disabled, then the menu ends up with no active entry. Returns an empty string. *pathName* **add** *type* ?*option value option value ...*? Add a new entry to the bottom of the menu. The new entry's type is given by *type* and must be one of **cascade**, **checkbutton**, **command**, **radiobutton**, or **separator**, or a unique abbreviation of one of the above. If additional arguments are present, they specify the options listed in the **[MENU ENTRY OPTIONS](#M48)** section below. The **add** widget command returns an empty string. *pathName* **cget** *option* Returns the current value of the configuration option given by *option*. *Option* may have any of the values accepted by the **menu** command. *pathName* **clone** *newPathname* ?*cloneType*? Makes a clone of the current menu named *newPathName*. This clone is a menu in its own right, but any changes to the clone are propagated to the original menu and vice versa. *cloneType* can be **normal**, **menubar**, or **tearoff**. Should not normally be called outside of the Tk library. See the **[CLONES](#M22)** section for more information. *pathName* **configure** ?*option*? 
?*value option value ...*? Query or modify the configuration options of the widget. If no *option* is specified, returns a list describing all of the available options for *pathName* (see **[Tk\_ConfigureInfo](https://www.tcl.tk/man/tcl/TkLib/ConfigWidg.htm)** for information on the format of this list). If *option* is specified with no *value*, then the command returns a list describing the one named option (this list will be identical to the corresponding sublist of the value returned if no *option* is specified). If one or more *option-value* pairs are specified, then the command modifies the given widget option(s) to have the given value(s); in this case the command returns an empty string. *Option* may have any of the values accepted by the **menu** command. *pathName* **delete** *index1* ?*index2*? Delete all of the menu entries between *index1* and *index2* inclusive. If *index2* is omitted then it defaults to *index1*. Attempts to delete a tear-off menu entry are ignored (instead, you should change the **-tearoff** option to remove the tear-off entry). *pathName* **entrycget** *index option* Returns the current value of a configuration option for the entry given by *index*. *Option* may have any of the names described in the **[MENU ENTRY OPTIONS](#M48)** section below. *pathName* **entryconfigure** *index* ?*options...*? This command is similar to the **configure** command, except that it applies to the options for an individual entry, whereas **configure** applies to the options for the menu as a whole. *Options* may have any of the values described in the **[MENU ENTRY OPTIONS](#M48)** section below. If *options* are specified, options are modified as indicated in the command and the command returns an empty string. If no *options* are specified, returns a list describing the current options for entry *index* (see **[Tk\_ConfigureInfo](https://www.tcl.tk/man/tcl/TkLib/ConfigWidg.htm)** for information on the format of this list). 
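The widget commands above can be combined as in this minimal sketch (menu path and labels are illustrative):

```
# Sketch: building a menu, then querying and modifying its entries.
menu .m -tearoff 0
.m add command -label "Open..." -command {puts open}
.m add command -label "Save"    -command {puts save}
.m add separator
.m add command -label "Quit"    -command {exit}

# Query a single entry option (entries are numbered from 0):
puts [.m entrycget 1 -label]          ;# -> Save

# entryconfigure works like configure, but on one entry:
.m entryconfigure 1 -state disabled

# Delete the separator and every entry after it:
.m delete 2 end
```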
*pathName* **index** *index* Returns the numerical index corresponding to *index*, or **none** if *index* was specified as **none**. *pathName* **insert** *index type* ?*option value option value ...*? Same as the **add** widget command except that it inserts the new entry just before the entry given by *index*, instead of appending to the end of the menu. The *type*, *option*, and *value* arguments have the same interpretation as for the **add** widget command. It is not possible to insert new menu entries before the tear-off entry, if the menu has one. *pathName* **invoke** *index* Invoke the action of the menu entry. See the sections on the individual entries above for details on what happens. If the menu entry is disabled then nothing happens. If the entry has a command associated with it then the result of that command is returned as the result of the **invoke** widget command. Otherwise the result is an empty string. Note: invoking a menu entry does not automatically unpost the menu; the default bindings normally take care of this before invoking the **invoke** widget command. *pathName* **post** *x y* Arrange for the menu to be displayed on the screen at the root-window coordinates given by *x* and *y*. These coordinates are adjusted if necessary to guarantee that the entire menu is visible on the screen. This command normally returns an empty string. If the **-postcommand** option has been specified, then its value is executed as a Tcl script before posting the menu and the result of that script is returned as the result of the **post** widget command. If an error occurs while executing the script, the error is returned without posting the menu. *pathName* **postcascade** *index* Posts the submenu associated with the cascade entry given by *index*, and unposts any previously posted submenu. If *index* does not correspond to a cascade entry, or if *pathName* is not posted, the command has no effect except to unpost any currently posted submenu. 
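A common use of **post** and **invoke** is a context menu; a minimal sketch (widget names and bindings are illustrative):

```
# Sketch: posting a menu at the mouse position and invoking an entry.
menu .ctx -tearoff 0
.ctx add command -label "Cut"   -command {puts cut}
.ctx add command -label "Paste" -command {puts paste}

# %X and %Y are root-window coordinates, which is what post expects.
bind . <3> {.ctx post %X %Y}

# An entry can also be invoked programmatically; invoke returns the
# result of the entry's -command script.
.ctx invoke 0    ;# runs the "Cut" command
```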
*pathName* **type** *index* Returns the type of the menu entry given by *index*. This is the *type* argument passed to the **add** or **insert** widget command when the entry was created, such as **command** or **separator**, or **tearoff** for a tear-off entry. *pathName* **unpost** Unmap the window so that it is no longer displayed. If a lower-level cascaded menu is posted, unpost that menu. Returns an empty string. This subcommand does not work on Windows and the Macintosh, as those platforms have their own way of unposting menus. *pathName* **xposition** *index* Returns a decimal string giving the x-coordinate within the menu window of the leftmost pixel in the entry specified by *index*. *pathName* **yposition** *index* Returns a decimal string giving the y-coordinate within the menu window of the topmost pixel in the entry specified by *index*. Menu entry options ------------------ The following options are allowed on menu entries. Most options are not supported by all entry types. **-activebackground** *value* Specifies a background color to use for displaying this entry when it is active. If this option is specified as an empty string (the default), then the **-activebackground** option for the overall menu is used. If the **tk\_strictMotif** variable has been set to request strict Motif compliance, then this option is ignored and the **-background** option is used in its place. This option is not available for separator or tear-off entries. **-activeforeground** *value* Specifies a foreground color to use for displaying this entry when it is active. If this option is specified as an empty string (the default), then the **-activeforeground** option for the overall menu is used. This option is not available for separator or tear-off entries. **-accelerator** *value* Specifies a string to display at the right side of the menu entry. Normally describes an accelerator keystroke sequence that may be typed to invoke the same function as the menu entry. 
This option is not available for separator or tear-off entries. **-background** *value* Specifies a background color to use for displaying this entry when it is in the normal state (neither active nor disabled). If this option is specified as an empty string (the default), then the **-background** option for the overall menu is used. This option is not available for separator or tear-off entries. **-bitmap** *value* Specifies a bitmap to display in the menu instead of a textual label, in any of the forms accepted by **[Tk\_GetBitmap](https://www.tcl.tk/man/tcl/TkLib/GetBitmap.htm)**. This option overrides the **-label** option (as controlled by the **-compound** option) but may be reset to an empty string to enable a textual label to be displayed. If a **-image** option has been specified, it overrides **-bitmap**. This option is not available for separator or tear-off entries. **-columnbreak** *value* When this option is zero, the entry appears below the previous entry. When this option is one, the entry appears at the top of a new column in the menu. This option is ignored on Aqua/Mac OS X, where menus are always a single column. **-command** *value* Specifies a Tcl command to execute when the menu entry is invoked. Not available for separator or tear-off entries. **-compound** *value* Specifies whether the menu entry should display both an image and text, and if so, where the image should be placed relative to the text. Valid values for this option are **bottom**, **center**, **left**, **none**, **right** and **top**. The default value is **none**, meaning that the button will display either an image or text, depending on the values of the **-image** and **-bitmap** options. **-font** *value* Specifies the font to use when drawing the label or accelerator string in this entry. If this option is specified as an empty string (the default) then the **-font** option for the overall menu is used. This option is not available for separator or tear-off entries. 
**-foreground** *value* Specifies a foreground color to use for displaying this entry when it is in the normal state (neither active nor disabled). If this option is specified as an empty string (the default), then the **-foreground** option for the overall menu is used. This option is not available for separator or tear-off entries. **-hidemargin** *value* Specifies whether the standard margins should be drawn for this menu entry. This is useful when creating palettes with images in them, i.e., color palettes, pattern palettes, etc. 1 indicates that the margin for the entry is hidden; 0 means that the margin is used. **-image** *value* Specifies an image to display in the menu instead of a text string or bitmap. The image must have been created by some previous invocation of **[image create](image.htm)**. This option overrides the **-label** and **-bitmap** options (as controlled by the **-compound** option) but may be reset to an empty string to enable a textual or bitmap label to be displayed. This option is not available for separator or tear-off entries. **-indicatoron** *value* Available only for checkbutton and radiobutton entries. *Value* is a boolean that determines whether or not the indicator should be displayed. **-label** *value* Specifies a string to display as an identifying label in the menu entry. Not available for separator or tear-off entries. **-menu** *value* Available only for cascade entries. Specifies the path name of the submenu associated with this entry. The submenu must be a child of the menu. **-offvalue** *value* Available only for checkbutton entries. Specifies the value to store in the entry's associated variable when the entry is deselected. **-onvalue** *value* Available only for checkbutton entries. Specifies the value to store in the entry's associated variable when the entry is selected. **-selectcolor** *value* Available only for checkbutton and radiobutton entries. 
Specifies the color to display in the indicator when the entry is selected. If the value is an empty string (the default) then the **-selectcolor** option for the menu determines the indicator color. **-selectimage** *value* Available only for checkbutton and radiobutton entries. Specifies an image to display in the entry (in place of the **-image** option) when it is selected. *Value* is the name of an image, which must have been created by some previous invocation of **[image create](image.htm)**. This option is ignored unless the **-image** option has been specified. **-state** *value* Specifies one of three states for the entry: **normal**, **active**, or **disabled**. In normal state the entry is displayed using the **-foreground** option for the menu and the **-background** option from the entry or the menu. The active state is typically used when the pointer is over the entry. In active state the entry is displayed using the **-activeforeground** option for the menu along with the **-activebackground** option from the entry. Disabled state means that the entry should be insensitive: the default bindings will refuse to activate or invoke the entry. In this state the entry is displayed according to the **-disabledforeground** option for the menu and the **-background** option from the entry. This option is not available for separator entries. **-underline** *value* Specifies the integer index of a character to underline in the entry. This option is also queried by the default bindings and used to implement keyboard traversal. 0 corresponds to the first character of the text displayed in the entry, 1 to the next character, and so on. If a bitmap or image is displayed in the entry then this option is ignored. This option is not available for separator or tear-off entries. **-value** *value* Available only for radiobutton entries. Specifies the value to store in the entry's associated variable when the entry is selected. 
If an empty string is specified, then the **-label** option for the entry is used as the value to store in the variable. **-variable** *value* Available only for checkbutton and radiobutton entries. Specifies the name of a global variable to set when the entry is selected. For checkbutton entries the variable is also set when the entry is deselected. For radiobutton entries, changing the variable causes the currently-selected entry to deselect itself. For checkbutton entries, the default value of this option is taken from the **-label** option, and for radiobutton entries a single fixed value is used. It is recommended that you always set the **-variable** option when creating either a checkbutton or a radiobutton. Menu configurations ------------------- The default bindings support four different ways of using menus: **Pulldown Menus in Menubar** This is the most common case. You create a menu widget that will become the menu bar. You then add cascade entries to this menu, specifying the pull down menus you wish to use in your menu bar. You then create all of the pulldowns. Once you have done this, specify the menu using the **-menu** option of the toplevel's widget command. See the **[toplevel](toplevel.htm)** manual entry for details. **Pulldown Menus in Menu Buttons** This is the compatible way to do menu bars. You create one menubutton widget for each top-level menu, and typically you arrange a series of menubuttons in a row in a menubar window. You also create the top-level menus and any cascaded submenus, and tie them together with **-menu** options in menubuttons and cascade menu entries. The top-level menu must be a child of the menubutton, and each submenu must be a child of the menu that refers to it. Once you have done this, the default bindings will allow users to traverse and invoke the tree of menus via its menubutton; see the **[menubutton](menubutton.htm)** manual entry for details. 
**Popup Menus** Popup menus typically post in response to a mouse button press or keystroke. You create the popup menus and any cascaded submenus, then you call the **[tk\_popup](popup.htm)** procedure at the appropriate time to post the top-level menu. **Option Menus** An option menu consists of a menubutton with an associated menu that allows you to select one of several values. The current value is displayed in the menubutton and is also stored in a global variable. Use the **tk\_optionMenu** procedure to create option menubuttons and their menus. **Torn-off Menus** You create a torn-off menu by invoking the tear-off entry at the top of an existing menu. The default bindings will create a new menu that is a copy of the original menu and leave it permanently posted as a top-level window. The torn-off menu behaves just the same as the original menu. Default bindings ---------------- Tk automatically creates class bindings for menus that give them the following default behavior: 1. When the mouse enters a menu, the entry underneath the mouse cursor activates; as the mouse moves around the menu, the active entry changes to track the mouse. 2. When the mouse leaves a menu all of the entries in the menu deactivate, except in the special case where the mouse moves from a menu to a cascaded submenu. 3. When a button is released over a menu, the active entry (if any) is invoked. The menu also unposts unless it is a torn-off menu. 4. The Space and Return keys invoke the active entry and unpost the menu. 5. If any of the entries in a menu have letters underlined with the **-underline** option, then pressing one of the underlined letters (or its upper-case or lower-case equivalent) invokes that entry and unposts the menu. 6. The Escape key aborts a menu selection in progress without invoking any entry. It also unposts the menu unless it is a torn-off menu. 7. The Up and Down keys activate the next higher or lower entry in the menu. 
When one end of the menu is reached, the active entry wraps around to the other end. 8. The Left key moves to the next menu to the left. If the current menu is a cascaded submenu, then the submenu is unposted and the current menu entry becomes the cascade entry in the parent. If the current menu is a top-level menu posted from a menubutton, then the current menubutton is unposted and the next menubutton to the left is posted. Otherwise the key has no effect. The left-right order of menubuttons is determined by their stacking order: Tk assumes that the lowest menubutton (which by default is the first one created) is on the left. 9. The Right key moves to the next menu to the right. If the current entry is a cascade entry, then the submenu is posted and the current menu entry becomes the first entry in the submenu. Otherwise, if the current menu was posted from a menubutton, then the current menubutton is unposted and the next menubutton to the right is posted. Disabled menu entries are non-responsive: they do not activate and they ignore mouse button presses and releases. Several of the bindings make use of the command **tk\_menuSetFocus**. It saves the current focus and sets the focus to its *pathName* argument, which is a menu widget. The behavior of menus can be changed by defining new bindings for individual widgets or by redefining the class bindings. Bugs ---- At present it is not possible to use the option database to specify values for the options to individual entries. See also -------- **[bind](bind.htm)**, **[menubutton](menubutton.htm)**, **[ttk::menubutton](ttk_menubutton.htm)**, **[toplevel](toplevel.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/menu.htm>
ttk\_style
==========

Name
----

ttk::style — Manipulate style database

Synopsis
--------

**ttk::style** *option* ?*args*?

Notes
-----

See also the Tcl'2004 conference presentation, available at <http://tktable.sourceforge.net/tile/tile-tcl2004.pdf>

Definitions
-----------

Each widget is assigned a *style*, which specifies the set of elements making up the widget and how they are arranged, along with dynamic and default settings for element options. By default, the style name is the same as the widget's class; this may be overridden by the **-style** option.

A *theme* is a collection of elements and styles which controls the overall look and feel of an application. 
Description ----------- The **ttk::style** command takes the following arguments: **ttk::style configure** *style* ?*-option* ?*value option value...*? ? Sets the default value of the specified option(s) in *style*. **ttk::style map** *style* ?*-option* **{** *statespec value...* **}**? Sets dynamic values of the specified option(s) in *style*. Each *statespec / value* pair is examined in order; the value corresponding to the first matching *statespec* is used. **ttk::style lookup** *style* *-option* ?*state* ?*default*?? Returns the value specified for *-option* in style *style* in state *state*, using the standard lookup rules for element options. *state* is a list of state names; if omitted, it defaults to all bits off (the “normal” state). If the *default* argument is present, it is used as a fallback value in case no specification for *-option* is found. **ttk::style layout** *style* ?*layoutSpec*? Define the widget layout for style *style*. See **[LAYOUTS](#M18)** below for the format of *layoutSpec*. If *layoutSpec* is omitted, return the layout specification for style *style*. **ttk::style element create** *elementName* *type* ?*args...*? Creates a new element in the current theme of type *type*. The only cross-platform built-in element type is *image* (see **[ttk\_image](ttk_image.htm)**(n)) but themes may define other element types (see **Ttk\_RegisterElementFactory**). On suitable versions of Windows an element factory is registered to create Windows theme elements (see **[ttk\_vsapi](ttk_vsapi.htm)**(n)). **ttk::style element names** Returns the list of elements defined in the current theme. **ttk::style element options** *element* Returns the list of *element*'s options. **ttk::style theme create** *themeName* ?**-parent** *basedon*? ?**-settings** *script...* ? Creates a new theme. It is an error if *themeName* already exists. If **-parent** is specified, the new theme will inherit styles, elements, and layouts from the parent theme *basedon*. 
If **-settings** is present, *script* is evaluated in the context of the new theme as per **ttk::style theme settings**. **ttk::style theme settings** *themeName* *script* Temporarily sets the current theme to *themeName*, evaluates *script*, then restores the previous theme. Typically *script* simply defines styles and elements, though arbitrary Tcl code may appear. **ttk::style theme names** Returns a list of all known themes. **ttk::style theme use** ?*themeName*? Without an argument the result is the name of the current theme. Otherwise this command sets the current theme to *themeName*, and refreshes all widgets. Layouts ------- A *layout* specifies a list of elements, each followed by one or more options specifying how to arrange the element. The layout mechanism uses a simplified version of the **[pack](pack.htm)** geometry manager: given an initial cavity, each element is allocated a parcel. Valid options are: **-side** *side* Specifies which side of the cavity to place the element; one of **left**, **right**, **top**, or **bottom**. If omitted, the element occupies the entire cavity. **-sticky** **[***nswe***]** Specifies where the element is placed inside its allocated parcel. **-children {** *sublayout...* **}** Specifies a list of elements to place inside the element. 
For example: ``` ttk::style layout Horizontal.TScrollbar { Scrollbar.trough -children { Scrollbar.leftarrow -side left Scrollbar.rightarrow -side right Horizontal.Scrollbar.thumb -side left -sticky ew } } ``` See also -------- **[ttk::intro](ttk_intro.htm)**, **[ttk::widget](ttk_widget.htm)**, **[photo](photo.htm)**, **[ttk\_image](ttk_image.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/ttk_style.htm> ttk\_intro ========== Name ---- ttk::intro — Introduction to the Tk theme engine Overview -------- The Tk themed widget set is based on a revised and enhanced version of TIP #48 (<http://tip.tcl.tk/48>) specified style engine. The main concepts are described below. The basic idea is to separate, to the extent possible, the code implementing a widget's behavior from the code implementing its appearance. Widget class bindings are primarily responsible for maintaining the widget state and invoking callbacks; all aspects of the widget's appearance are controlled by the style of the widget (i.e. the style of the elements of the widget). Themes ------ A *theme* is a collection of elements and styles that determine the look and feel of the widget set. Themes can be used to: * isolate platform differences (X11 vs. classic Windows vs. XP vs. Aqua ...) * adapt to display limitations (low-color, grayscale, monochrome, tiny screens) * accessibility (high contrast, large type) * application suite branding * blend in with the rest of the desktop (Gnome, KDE, Java) * and, of course: eye candy. Elements -------- An *element* displays an individual part of a widget. For example, a vertical scrollbar widget contains **uparrow**, **downarrow**, **trough** and **slider** elements. Element names use a recursive dotted notation. For example, **uparrow** identifies a generic arrow element, and **Scrollbar.uparrow** and **Combobox.uparrow** identify widget-specific elements. 
When looking for an element, the style engine looks for the specific name first, and if an element of that name is not found it looks for generic elements by stripping off successive leading components of the element name. Like widgets, elements have *options* which specify what to display and how to display it. For example, the **text** element (which displays a text string) has **-text**, **-font**, **-foreground**, **-background**, **-underline**, and **-width** options. The value of an element option is taken from: * an option of the same name and type in the widget containing the element; * a dynamic setting specified by **[style map](ttk_style.htm)** and the current state; * the default setting specified by **style configure**; or * the element's built-in default value for the option. Layouts ------- A *layout* specifies which elements make up a widget and how they are arranged. The layout engine uses a simplified version of the **[pack](pack.htm)** algorithm: starting with an initial cavity equal to the size of the widget, elements are allocated a parcel within the cavity along the side specified by the **-side** option, and placed within the parcel according to the **-sticky** option. For example, the layout for a horizontal scrollbar is: ``` ttk::style layout Horizontal.TScrollbar { Scrollbar.trough -children { Scrollbar.leftarrow -side left -sticky w Scrollbar.rightarrow -side right -sticky e Scrollbar.thumb -side left -expand true -sticky ew } } ``` By default, the layout for a widget is the same as its class name. Some widgets may override this (for example, the **[ttk::scrollbar](ttk_scrollbar.htm)** widget chooses different layouts based on the **-orient** option). States ------ In standard Tk, many widgets have a **-state** option which (in most cases) is either **normal** or **disabled**. 
Some widgets support additional states, such as the **[entry](entry.htm)** widget which has a **readonly** state and the various flavors of buttons which have **active** state. The themed Tk widgets generalize this idea: every widget has a bitmap of independent state flags. Widget state flags include **active**, **disabled**, **pressed**, **focus**, etc., (see *ttk::widget(n)* for the full list of state flags). Instead of a **-state** option, every widget now has a **state** widget command which is used to set or query the state. A *state specification* is a list of symbolic state names indicating which bits are set, each optionally prefixed with an exclamation point indicating that the bit is cleared instead. For example, the class bindings for the **[ttk::button](ttk_button.htm)** widget are: ``` bind TButton <Enter> { %W state active } bind TButton <Leave> { %W state !active } bind TButton <ButtonPress-1> { %W state pressed } bind TButton <Button1-Leave> { %W state !pressed } bind TButton <Button1-Enter> { %W state pressed } bind TButton <ButtonRelease-1> \ { %W instate {pressed} { %W state !pressed ; %W invoke } } ``` This specifies that the widget becomes **active** when the pointer enters the widget, and inactive when it leaves. Similarly it becomes **pressed** when the mouse button is pressed, and **!pressed** on the ButtonRelease event. In addition, the button unpresses if the pointer is dragged outside the widget while Button-1 is held down, and represses if it's dragged back in. Finally, when the mouse button is released, the widget's **-command** is invoked, but only if the button is currently in the **pressed** state. (The actual bindings are a little more complicated than the above, but not by much). Styles ------ Each widget is associated with a *style*, which specifies values for element options. Style names use a recursive dotted notation like layouts and elements; by default, widgets use the class name to look up a style in the current theme. 
For example: ``` ttk::style configure TButton \ -background #d9d9d9 \ -foreground black \ -relief raised \ ; ``` Many elements are displayed differently depending on the widget state. For example, buttons have a different background when they are active, a different foreground when disabled, and a different relief when pressed. The **[style map](ttk_style.htm)** command specifies dynamic option settings for a particular style: ``` ttk::style map TButton \ -background [list disabled #d9d9d9 active #ececec] \ -foreground [list disabled #a3a3a3] \ -relief [list {pressed !disabled} sunken] \ ; ``` See also -------- **[ttk::widget](ttk_widget.htm)**, **[ttk::style](ttk_style.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/ttk_intro.htm> colors ====== Name ---- colors — symbolic color names recognized by Tk Description ----------- Tk recognizes many symbolic color names (e.g., **red**) when specifying colors. 
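A symbolic name can be used anywhere Tk expects a color; a brief sketch (widget names are illustrative):

```
# Sketch: using symbolic color names. Multi-word names must be
# quoted; the CamelCase forms need no quoting.
label .l1 -background "medium sea green" -text "sea green"
label .l2 -background MediumSeaGreen     -text "same color"
pack .l1 .l2

# winfo rgb reports the 16-bit-per-channel value for any color name:
puts [winfo rgb . MediumSeaGreen]
```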
The symbolic names recognized by Tk and their 8-bit-per-channel RGB values are: | | | | | | --- | --- | --- | --- | | **Name** | **Red** | **Green** | **Blue** | | alice blue | 240 | 248 | 255 | | AliceBlue | 240 | 248 | 255 | | antique white | 250 | 235 | 215 | | AntiqueWhite | 250 | 235 | 215 | | AntiqueWhite1 | 255 | 239 | 219 | | AntiqueWhite2 | 238 | 223 | 204 | | AntiqueWhite3 | 205 | 192 | 176 | | AntiqueWhite4 | 139 | 131 | 120 | | agua | 0 | 255 | 255 | | aquamarine | 127 | 255 | 212 | | aquamarine1 | 127 | 255 | 212 | | aquamarine2 | 118 | 238 | 198 | | aquamarine3 | 102 | 205 | 170 | | aquamarine4 | 69 | 139 | 116 | | azure | 240 | 255 | 255 | | azure1 | 240 | 255 | 255 | | azure2 | 224 | 238 | 238 | | azure3 | 193 | 205 | 205 | | azure4 | 131 | 139 | 139 | | beige | 245 | 245 | 220 | | bisque | 255 | 228 | 196 | | bisque1 | 255 | 228 | 196 | | bisque2 | 238 | 213 | 183 | | bisque3 | 205 | 183 | 158 | | bisque4 | 139 | 125 | 107 | | black | 0 | 0 | 0 | | blanched almond | 255 | 235 | 205 | | BlanchedAlmond | 255 | 235 | 205 | | blue | 0 | 0 | 255 | | blue violet | 138 | 43 | 226 | | blue1 | 0 | 0 | 255 | | blue2 | 0 | 0 | 238 | | blue3 | 0 | 0 | 205 | | blue4 | 0 | 0 | 139 | | BlueViolet | 138 | 43 | 226 | | brown | 165 | 42 | 42 | | brown1 | 255 | 64 | 64 | | brown2 | 238 | 59 | 59 | | brown3 | 205 | 51 | 51 | | brown4 | 139 | 35 | 35 | | burlywood | 222 | 184 | 135 | | burlywood1 | 255 | 211 | 155 | | burlywood2 | 238 | 197 | 145 | | burlywood3 | 205 | 170 | 125 | | burlywood4 | 139 | 115 | 85 | | cadet blue | 95 | 158 | 160 | | CadetBlue | 95 | 158 | 160 | | CadetBlue1 | 152 | 245 | 255 | | CadetBlue2 | 142 | 229 | 238 | | CadetBlue3 | 122 | 197 | 205 | | CadetBlue4 | 83 | 134 | 139 | | chartreuse | 127 | 255 | 0 | | chartreuse1 | 127 | 255 | 0 | | chartreuse2 | 118 | 238 | 0 | | chartreuse3 | 102 | 205 | 0 | | chartreuse4 | 69 | 139 | 0 | | chocolate | 210 | 105 | 30 | | chocolate1 | 255 | 127 | 36 | | chocolate2 | 238 | 118 | 33 | | chocolate3 | 205 
| 102 | 29 | | chocolate4 | 139 | 69 | 19 | | coral | 255 | 127 | 80 | | coral1 | 255 | 114 | 86 | | coral2 | 238 | 106 | 80 | | coral3 | 205 | 91 | 69 | | coral4 | 139 | 62 | 47 | | cornflower blue | 100 | 149 | 237 | | CornflowerBlue | 100 | 149 | 237 | | cornsilk | 255 | 248 | 220 | | cornsilk1 | 255 | 248 | 220 | | cornsilk2 | 238 | 232 | 205 | | cornsilk3 | 205 | 200 | 177 | | cornsilk4 | 139 | 136 | 120 | | crimson | 220 | 20 | 60 | | cyan | 0 | 255 | 255 | | cyan1 | 0 | 255 | 255 | | cyan2 | 0 | 238 | 238 | | cyan3 | 0 | 205 | 205 | | cyan4 | 0 | 139 | 139 | | dark blue | 0 | 0 | 139 | | dark cyan | 0 | 139 | 139 | | dark goldenrod | 184 | 134 | 11 | | dark gray | 169 | 169 | 169 | | dark green | 0 | 100 | 0 | | dark grey | 169 | 169 | 169 | | dark khaki | 189 | 183 | 107 | | dark magenta | 139 | 0 | 139 | | dark olive green | 85 | 107 | 47 | | dark orange | 255 | 140 | 0 | | dark orchid | 153 | 50 | 204 | | dark red | 139 | 0 | 0 | | dark salmon | 233 | 150 | 122 | | dark sea green | 143 | 188 | 143 | | dark slate blue | 72 | 61 | 139 | | dark slate gray | 47 | 79 | 79 | | dark slate grey | 47 | 79 | 79 | | dark turquoise | 0 | 206 | 209 | | dark violet | 148 | 0 | 211 | | DarkBlue | 0 | 0 | 139 | | DarkCyan | 0 | 139 | 139 | | DarkGoldenrod | 184 | 134 | 11 | | DarkGoldenrod1 | 255 | 185 | 15 | | DarkGoldenrod2 | 238 | 173 | 14 | | DarkGoldenrod3 | 205 | 149 | 12 | | DarkGoldenrod4 | 139 | 101 | 8 | | DarkGray | 169 | 169 | 169 | | DarkGreen | 0 | 100 | 0 | | DarkGrey | 169 | 169 | 169 | | DarkKhaki | 189 | 183 | 107 | | DarkMagenta | 139 | 0 | 139 | | DarkOliveGreen | 85 | 107 | 47 | | DarkOliveGreen1 | 202 | 255 | 112 | | DarkOliveGreen2 | 188 | 238 | 104 | | DarkOliveGreen3 | 162 | 205 | 90 | | DarkOliveGreen4 | 110 | 139 | 61 | | DarkOrange | 255 | 140 | 0 | | DarkOrange1 | 255 | 127 | 0 | | DarkOrange2 | 238 | 118 | 0 | | DarkOrange3 | 205 | 102 | 0 | | DarkOrange4 | 139 | 69 | 0 | | DarkOrchid | 153 | 50 | 204 | | DarkOrchid1 | 191 | 62 | 255 | |
DarkOrchid2 | 178 | 58 | 238 | | DarkOrchid3 | 154 | 50 | 205 | | DarkOrchid4 | 104 | 34 | 139 | | DarkRed | 139 | 0 | 0 | | DarkSalmon | 233 | 150 | 122 | | DarkSeaGreen | 143 | 188 | 143 | | DarkSeaGreen1 | 193 | 255 | 193 | | DarkSeaGreen2 | 180 | 238 | 180 | | DarkSeaGreen3 | 155 | 205 | 155 | | DarkSeaGreen4 | 105 | 139 | 105 | | DarkSlateBlue | 72 | 61 | 139 | | DarkSlateGray | 47 | 79 | 79 | | DarkSlateGray1 | 151 | 255 | 255 | | DarkSlateGray2 | 141 | 238 | 238 | | DarkSlateGray3 | 121 | 205 | 205 | | DarkSlateGray4 | 82 | 139 | 139 | | DarkSlateGrey | 47 | 79 | 79 | | DarkTurquoise | 0 | 206 | 209 | | DarkViolet | 148 | 0 | 211 | | deep pink | 255 | 20 | 147 | | deep sky blue | 0 | 191 | 255 | | DeepPink | 255 | 20 | 147 | | DeepPink1 | 255 | 20 | 147 | | DeepPink2 | 238 | 18 | 137 | | DeepPink3 | 205 | 16 | 118 | | DeepPink4 | 139 | 10 | 80 | | DeepSkyBlue | 0 | 191 | 255 | | DeepSkyBlue1 | 0 | 191 | 255 | | DeepSkyBlue2 | 0 | 178 | 238 | | DeepSkyBlue3 | 0 | 154 | 205 | | DeepSkyBlue4 | 0 | 104 | 139 | | dim gray | 105 | 105 | 105 | | dim grey | 105 | 105 | 105 | | DimGray | 105 | 105 | 105 | | DimGrey | 105 | 105 | 105 | | dodger blue | 30 | 144 | 255 | | DodgerBlue | 30 | 144 | 255 | | DodgerBlue1 | 30 | 144 | 255 | | DodgerBlue2 | 28 | 134 | 238 | | DodgerBlue3 | 24 | 116 | 205 | | DodgerBlue4 | 16 | 78 | 139 | | firebrick | 178 | 34 | 34 | | firebrick1 | 255 | 48 | 48 | | firebrick2 | 238 | 44 | 44 | | firebrick3 | 205 | 38 | 38 | | firebrick4 | 139 | 26 | 26 | | floral white | 255 | 250 | 240 | | FloralWhite | 255 | 250 | 240 | | forest green | 34 | 139 | 34 | | ForestGreen | 34 | 139 | 34 | | fuchsia | 255 | 0 | 255 | | gainsboro | 220 | 220 | 220 | | ghost white | 248 | 248 | 255 | | GhostWhite | 248 | 248 | 255 | | gold | 255 | 215 | 0 | | gold1 | 255 | 215 | 0 | | gold2 | 238 | 201 | 0 | | gold3 | 205 | 173 | 0 | | gold4 | 139 | 117 | 0 | | goldenrod | 218 | 165 | 32 | | goldenrod1 | 255 | 193 | 37 | | goldenrod2 | 238 | 180 | 34 | | goldenrod3 
| 205 | 155 | 29 | | goldenrod4 | 139 | 105 | 20 | | gray | 128 | 128 | 128 | | gray0 | 0 | 0 | 0 | | gray1 | 3 | 3 | 3 | | gray2 | 5 | 5 | 5 | | gray3 | 8 | 8 | 8 | | gray4 | 10 | 10 | 10 | | gray5 | 13 | 13 | 13 | | gray6 | 15 | 15 | 15 | | gray7 | 18 | 18 | 18 | | gray8 | 20 | 20 | 20 | | gray9 | 23 | 23 | 23 | | gray10 | 26 | 26 | 26 | | gray11 | 28 | 28 | 28 | | gray12 | 31 | 31 | 31 | | gray13 | 33 | 33 | 33 | | gray14 | 36 | 36 | 36 | | gray15 | 38 | 38 | 38 | | gray16 | 41 | 41 | 41 | | gray17 | 43 | 43 | 43 | | gray18 | 46 | 46 | 46 | | gray19 | 48 | 48 | 48 | | gray20 | 51 | 51 | 51 | | gray21 | 54 | 54 | 54 | | gray22 | 56 | 56 | 56 | | gray23 | 59 | 59 | 59 | | gray24 | 61 | 61 | 61 | | gray25 | 64 | 64 | 64 | | gray26 | 66 | 66 | 66 | | gray27 | 69 | 69 | 69 | | gray28 | 71 | 71 | 71 | | gray29 | 74 | 74 | 74 | | gray30 | 77 | 77 | 77 | | gray31 | 79 | 79 | 79 | | gray32 | 82 | 82 | 82 | | gray33 | 84 | 84 | 84 | | gray34 | 87 | 87 | 87 | | gray35 | 89 | 89 | 89 | | gray36 | 92 | 92 | 92 | | gray37 | 94 | 94 | 94 | | gray38 | 97 | 97 | 97 | | gray39 | 99 | 99 | 99 | | gray40 | 102 | 102 | 102 | | gray41 | 105 | 105 | 105 | | gray42 | 107 | 107 | 107 | | gray43 | 110 | 110 | 110 | | gray44 | 112 | 112 | 112 | | gray45 | 115 | 115 | 115 | | gray46 | 117 | 117 | 117 | | gray47 | 120 | 120 | 120 | | gray48 | 122 | 122 | 122 | | gray49 | 125 | 125 | 125 | | gray50 | 127 | 127 | 127 | | gray51 | 130 | 130 | 130 | | gray52 | 133 | 133 | 133 | | gray53 | 135 | 135 | 135 | | gray54 | 138 | 138 | 138 | | gray55 | 140 | 140 | 140 | | gray56 | 143 | 143 | 143 | | gray57 | 145 | 145 | 145 | | gray58 | 148 | 148 | 148 | | gray59 | 150 | 150 | 150 | | gray60 | 153 | 153 | 153 | | gray61 | 156 | 156 | 156 | | gray62 | 158 | 158 | 158 | | gray63 | 161 | 161 | 161 | | gray64 | 163 | 163 | 163 | | gray65 | 166 | 166 | 166 | | gray66 | 168 | 168 | 168 | | gray67 | 171 | 171 | 171 | | gray68 | 173 | 173 | 173 | | gray69 | 176 | 176 | 176 | | gray70 | 179 | 179 | 179 | | 
gray71 | 181 | 181 | 181 | | gray72 | 184 | 184 | 184 | | gray73 | 186 | 186 | 186 | | gray74 | 189 | 189 | 189 | | gray75 | 191 | 191 | 191 | | gray76 | 194 | 194 | 194 | | gray77 | 196 | 196 | 196 | | gray78 | 199 | 199 | 199 | | gray79 | 201 | 201 | 201 | | gray80 | 204 | 204 | 204 | | gray81 | 207 | 207 | 207 | | gray82 | 209 | 209 | 209 | | gray83 | 212 | 212 | 212 | | gray84 | 214 | 214 | 214 | | gray85 | 217 | 217 | 217 | | gray86 | 219 | 219 | 219 | | gray87 | 222 | 222 | 222 | | gray88 | 224 | 224 | 224 | | gray89 | 227 | 227 | 227 | | gray90 | 229 | 229 | 229 | | gray91 | 232 | 232 | 232 | | gray92 | 235 | 235 | 235 | | gray93 | 237 | 237 | 237 | | gray94 | 240 | 240 | 240 | | gray95 | 242 | 242 | 242 | | gray96 | 245 | 245 | 245 | | gray97 | 247 | 247 | 247 | | gray98 | 250 | 250 | 250 | | gray99 | 252 | 252 | 252 | | gray100 | 255 | 255 | 255 | | green | 0 | 128 | 0 | | green yellow | 173 | 255 | 47 | | green1 | 0 | 255 | 0 | | green2 | 0 | 238 | 0 | | green3 | 0 | 205 | 0 | | green4 | 0 | 139 | 0 | | GreenYellow | 173 | 255 | 47 | | grey | 128 | 128 | 128 | | grey0 | 0 | 0 | 0 | | grey1 | 3 | 3 | 3 | | grey2 | 5 | 5 | 5 | | grey3 | 8 | 8 | 8 | | grey4 | 10 | 10 | 10 | | grey5 | 13 | 13 | 13 | | grey6 | 15 | 15 | 15 | | grey7 | 18 | 18 | 18 | | grey8 | 20 | 20 | 20 | | grey9 | 23 | 23 | 23 | | grey10 | 26 | 26 | 26 | | grey11 | 28 | 28 | 28 | | grey12 | 31 | 31 | 31 | | grey13 | 33 | 33 | 33 | | grey14 | 36 | 36 | 36 | | grey15 | 38 | 38 | 38 | | grey16 | 41 | 41 | 41 | | grey17 | 43 | 43 | 43 | | grey18 | 46 | 46 | 46 | | grey19 | 48 | 48 | 48 | | grey20 | 51 | 51 | 51 | | grey21 | 54 | 54 | 54 | | grey22 | 56 | 56 | 56 | | grey23 | 59 | 59 | 59 | | grey24 | 61 | 61 | 61 | | grey25 | 64 | 64 | 64 | | grey26 | 66 | 66 | 66 | | grey27 | 69 | 69 | 69 | | grey28 | 71 | 71 | 71 | | grey29 | 74 | 74 | 74 | | grey30 | 77 | 77 | 77 | | grey31 | 79 | 79 | 79 | | grey32 | 82 | 82 | 82 | | grey33 | 84 | 84 | 84 | | grey34 | 87 | 87 | 87 | | grey35 | 89 | 89 | 89 
| | grey36 | 92 | 92 | 92 | | grey37 | 94 | 94 | 94 | | grey38 | 97 | 97 | 97 | | grey39 | 99 | 99 | 99 | | grey40 | 102 | 102 | 102 | | grey41 | 105 | 105 | 105 | | grey42 | 107 | 107 | 107 | | grey43 | 110 | 110 | 110 | | grey44 | 112 | 112 | 112 | | grey45 | 115 | 115 | 115 | | grey46 | 117 | 117 | 117 | | grey47 | 120 | 120 | 120 | | grey48 | 122 | 122 | 122 | | grey49 | 125 | 125 | 125 | | grey50 | 127 | 127 | 127 | | grey51 | 130 | 130 | 130 | | grey52 | 133 | 133 | 133 | | grey53 | 135 | 135 | 135 | | grey54 | 138 | 138 | 138 | | grey55 | 140 | 140 | 140 | | grey56 | 143 | 143 | 143 | | grey57 | 145 | 145 | 145 | | grey58 | 148 | 148 | 148 | | grey59 | 150 | 150 | 150 | | grey60 | 153 | 153 | 153 | | grey61 | 156 | 156 | 156 | | grey62 | 158 | 158 | 158 | | grey63 | 161 | 161 | 161 | | grey64 | 163 | 163 | 163 | | grey65 | 166 | 166 | 166 | | grey66 | 168 | 168 | 168 | | grey67 | 171 | 171 | 171 | | grey68 | 173 | 173 | 173 | | grey69 | 176 | 176 | 176 | | grey70 | 179 | 179 | 179 | | grey71 | 181 | 181 | 181 | | grey72 | 184 | 184 | 184 | | grey73 | 186 | 186 | 186 | | grey74 | 189 | 189 | 189 | | grey75 | 191 | 191 | 191 | | grey76 | 194 | 194 | 194 | | grey77 | 196 | 196 | 196 | | grey78 | 199 | 199 | 199 | | grey79 | 201 | 201 | 201 | | grey80 | 204 | 204 | 204 | | grey81 | 207 | 207 | 207 | | grey82 | 209 | 209 | 209 | | grey83 | 212 | 212 | 212 | | grey84 | 214 | 214 | 214 | | grey85 | 217 | 217 | 217 | | grey86 | 219 | 219 | 219 | | grey87 | 222 | 222 | 222 | | grey88 | 224 | 224 | 224 | | grey89 | 227 | 227 | 227 | | grey90 | 229 | 229 | 229 | | grey91 | 232 | 232 | 232 | | grey92 | 235 | 235 | 235 | | grey93 | 237 | 237 | 237 | | grey94 | 240 | 240 | 240 | | grey95 | 242 | 242 | 242 | | grey96 | 245 | 245 | 245 | | grey97 | 247 | 247 | 247 | | grey98 | 250 | 250 | 250 | | grey99 | 252 | 252 | 252 | | grey100 | 255 | 255 | 255 | | honeydew | 240 | 255 | 240 | | honeydew1 | 240 | 255 | 240 | | honeydew2 | 224 | 238 | 224 | | honeydew3 | 193 | 205 | 
193 | | honeydew4 | 131 | 139 | 131 | | hot pink | 255 | 105 | 180 | | HotPink | 255 | 105 | 180 | | HotPink1 | 255 | 110 | 180 | | HotPink2 | 238 | 106 | 167 | | HotPink3 | 205 | 96 | 144 | | HotPink4 | 139 | 58 | 98 | | indian red | 205 | 92 | 92 | | IndianRed | 205 | 92 | 92 | | IndianRed1 | 255 | 106 | 106 | | IndianRed2 | 238 | 99 | 99 | | IndianRed3 | 205 | 85 | 85 | | IndianRed4 | 139 | 58 | 58 | | indigo | 75 | 0 | 130 | | ivory | 255 | 255 | 240 | | ivory1 | 255 | 255 | 240 | | ivory2 | 238 | 238 | 224 | | ivory3 | 205 | 205 | 193 | | ivory4 | 139 | 139 | 131 | | khaki | 240 | 230 | 140 | | khaki1 | 255 | 246 | 143 | | khaki2 | 238 | 230 | 133 | | khaki3 | 205 | 198 | 115 | | khaki4 | 139 | 134 | 78 | | lavender | 230 | 230 | 250 | | lavender blush | 255 | 240 | 245 | | LavenderBlush | 255 | 240 | 245 | | LavenderBlush1 | 255 | 240 | 245 | | LavenderBlush2 | 238 | 224 | 229 | | LavenderBlush3 | 205 | 193 | 197 | | LavenderBlush4 | 139 | 131 | 134 | | lawn green | 124 | 252 | 0 | | LawnGreen | 124 | 252 | 0 | | lemon chiffon | 255 | 250 | 205 | | LemonChiffon | 255 | 250 | 205 | | LemonChiffon1 | 255 | 250 | 205 | | LemonChiffon2 | 238 | 233 | 191 | | LemonChiffon3 | 205 | 201 | 165 | | LemonChiffon4 | 139 | 137 | 112 | | light blue | 173 | 216 | 230 | | light coral | 240 | 128 | 128 | | light cyan | 224 | 255 | 255 | | light goldenrod | 238 | 221 | 130 | | light goldenrod yellow | 250 | 250 | 210 | | light gray | 211 | 211 | 211 | | light green | 144 | 238 | 144 | | light grey | 211 | 211 | 211 | | light pink | 255 | 182 | 193 | | light salmon | 255 | 160 | 122 | | light sea green | 32 | 178 | 170 | | light sky blue | 135 | 206 | 250 | | light slate blue | 132 | 112 | 255 | | light slate gray | 119 | 136 | 153 | | light slate grey | 119 | 136 | 153 | | light steel blue | 176 | 196 | 222 | | light yellow | 255 | 255 | 224 | | LightBlue | 173 | 216 | 230 | | LightBlue1 | 191 | 239 | 255 | | LightBlue2 | 178 | 223 | 238 | | LightBlue3 | 154 | 192 | 205 | | 
LightBlue4 | 104 | 131 | 139 | | LightCoral | 240 | 128 | 128 | | LightCyan | 224 | 255 | 255 | | LightCyan1 | 224 | 255 | 255 | | LightCyan2 | 209 | 238 | 238 | | LightCyan3 | 180 | 205 | 205 | | LightCyan4 | 122 | 139 | 139 | | LightGoldenrod | 238 | 221 | 130 | | LightGoldenrod1 | 255 | 236 | 139 | | LightGoldenrod2 | 238 | 220 | 130 | | LightGoldenrod3 | 205 | 190 | 112 | | LightGoldenrod4 | 139 | 129 | 76 | | LightGoldenrodYellow | 250 | 250 | 210 | | LightGray | 211 | 211 | 211 | | LightGreen | 144 | 238 | 144 | | LightGrey | 211 | 211 | 211 | | LightPink | 255 | 182 | 193 | | LightPink1 | 255 | 174 | 185 | | LightPink2 | 238 | 162 | 173 | | LightPink3 | 205 | 140 | 149 | | LightPink4 | 139 | 95 | 101 | | LightSalmon | 255 | 160 | 122 | | LightSalmon1 | 255 | 160 | 122 | | LightSalmon2 | 238 | 149 | 114 | | LightSalmon3 | 205 | 129 | 98 | | LightSalmon4 | 139 | 87 | 66 | | LightSeaGreen | 32 | 178 | 170 | | LightSkyBlue | 135 | 206 | 250 | | LightSkyBlue1 | 176 | 226 | 255 | | LightSkyBlue2 | 164 | 211 | 238 | | LightSkyBlue3 | 141 | 182 | 205 | | LightSkyBlue4 | 96 | 123 | 139 | | LightSlateBlue | 132 | 112 | 255 | | LightSlateGray | 119 | 136 | 153 | | LightSlateGrey | 119 | 136 | 153 | | LightSteelBlue | 176 | 196 | 222 | | LightSteelBlue1 | 202 | 225 | 255 | | LightSteelBlue2 | 188 | 210 | 238 | | LightSteelBlue3 | 162 | 181 | 205 | | LightSteelBlue4 | 110 | 123 | 139 | | LightYellow | 255 | 255 | 224 | | LightYellow1 | 255 | 255 | 224 | | LightYellow2 | 238 | 238 | 209 | | LightYellow3 | 205 | 205 | 180 | | LightYellow4 | 139 | 139 | 122 | | lime | 0 | 255 | 0 | | lime green | 50 | 205 | 50 | | LimeGreen | 50 | 205 | 50 | | linen | 250 | 240 | 230 | | magenta | 255 | 0 | 255 | | magenta1 | 255 | 0 | 255 | | magenta2 | 238 | 0 | 238 | | magenta3 | 205 | 0 | 205 | | magenta4 | 139 | 0 | 139 | | maroon | 128 | 0 | 0 | | maroon1 | 255 | 52 | 179 | | maroon2 | 238 | 48 | 167 | | maroon3 | 205 | 41 | 144 | | maroon4 | 139 | 28 | 98 | | medium aquamarine | 102 
| 205 | 170 | | medium blue | 0 | 0 | 205 | | medium orchid | 186 | 85 | 211 | | medium purple | 147 | 112 | 219 | | medium sea green | 60 | 179 | 113 | | medium slate blue | 123 | 104 | 238 | | medium spring green | 0 | 250 | 154 | | medium turquoise | 72 | 209 | 204 | | medium violet red | 199 | 21 | 133 | | MediumAquamarine | 102 | 205 | 170 | | MediumBlue | 0 | 0 | 205 | | MediumOrchid | 186 | 85 | 211 | | MediumOrchid1 | 224 | 102 | 255 | | MediumOrchid2 | 209 | 95 | 238 | | MediumOrchid3 | 180 | 82 | 205 | | MediumOrchid4 | 122 | 55 | 139 | | MediumPurple | 147 | 112 | 219 | | MediumPurple1 | 171 | 130 | 255 | | MediumPurple2 | 159 | 121 | 238 | | MediumPurple3 | 137 | 104 | 205 | | MediumPurple4 | 93 | 71 | 139 | | MediumSeaGreen | 60 | 179 | 113 | | MediumSlateBlue | 123 | 104 | 238 | | MediumSpringGreen | 0 | 250 | 154 | | MediumTurquoise | 72 | 209 | 204 | | MediumVioletRed | 199 | 21 | 133 | | midnight blue | 25 | 25 | 112 | | MidnightBlue | 25 | 25 | 112 | | mint cream | 245 | 255 | 250 | | MintCream | 245 | 255 | 250 | | misty rose | 255 | 228 | 225 | | MistyRose | 255 | 228 | 225 | | MistyRose1 | 255 | 228 | 225 | | MistyRose2 | 238 | 213 | 210 | | MistyRose3 | 205 | 183 | 181 | | MistyRose4 | 139 | 125 | 123 | | moccasin | 255 | 228 | 181 | | navajo white | 255 | 222 | 173 | | NavajoWhite | 255 | 222 | 173 | | NavajoWhite1 | 255 | 222 | 173 | | NavajoWhite2 | 238 | 207 | 161 | | NavajoWhite3 | 205 | 179 | 139 | | NavajoWhite4 | 139 | 121 | 94 | | navy | 0 | 0 | 128 | | navy blue | 0 | 0 | 128 | | NavyBlue | 0 | 0 | 128 | | old lace | 253 | 245 | 230 | | OldLace | 253 | 245 | 230 | | olive | 128 | 128 | 0 | | olive drab | 107 | 142 | 35 | | OliveDrab | 107 | 142 | 35 | | OliveDrab1 | 192 | 255 | 62 | | OliveDrab2 | 179 | 238 | 58 | | OliveDrab3 | 154 | 205 | 50 | | OliveDrab4 | 105 | 139 | 34 | | orange | 255 | 165 | 0 | | orange red | 255 | 69 | 0 | | orange1 | 255 | 165 | 0 | | orange2 | 238 | 154 | 0 | | orange3 | 205 | 133 | 0 | | orange4 | 139 | 
90 | 0 | | OrangeRed | 255 | 69 | 0 | | OrangeRed1 | 255 | 69 | 0 | | OrangeRed2 | 238 | 64 | 0 | | OrangeRed3 | 205 | 55 | 0 | | OrangeRed4 | 139 | 37 | 0 | | orchid | 218 | 112 | 214 | | orchid1 | 255 | 131 | 250 | | orchid2 | 238 | 122 | 233 | | orchid3 | 205 | 105 | 201 | | orchid4 | 139 | 71 | 137 | | pale goldenrod | 238 | 232 | 170 | | pale green | 152 | 251 | 152 | | pale turquoise | 175 | 238 | 238 | | pale violet red | 219 | 112 | 147 | | PaleGoldenrod | 238 | 232 | 170 | | PaleGreen | 152 | 251 | 152 | | PaleGreen1 | 154 | 255 | 154 | | PaleGreen2 | 144 | 238 | 144 | | PaleGreen3 | 124 | 205 | 124 | | PaleGreen4 | 84 | 139 | 84 | | PaleTurquoise | 175 | 238 | 238 | | PaleTurquoise1 | 187 | 255 | 255 | | PaleTurquoise2 | 174 | 238 | 238 | | PaleTurquoise3 | 150 | 205 | 205 | | PaleTurquoise4 | 102 | 139 | 139 | | PaleVioletRed | 219 | 112 | 147 | | PaleVioletRed1 | 255 | 130 | 171 | | PaleVioletRed2 | 238 | 121 | 159 | | PaleVioletRed3 | 205 | 104 | 127 | | PaleVioletRed4 | 139 | 71 | 93 | | papaya whip | 255 | 239 | 213 | | PapayaWhip | 255 | 239 | 213 | | peach puff | 255 | 218 | 185 | | PeachPuff | 255 | 218 | 185 | | PeachPuff1 | 255 | 218 | 185 | | PeachPuff2 | 238 | 203 | 173 | | PeachPuff3 | 205 | 175 | 149 | | PeachPuff4 | 139 | 119 | 101 | | peru | 205 | 133 | 63 | | pink | 255 | 192 | 203 | | pink1 | 255 | 181 | 197 | | pink2 | 238 | 169 | 184 | | pink3 | 205 | 145 | 158 | | pink4 | 139 | 99 | 108 | | plum | 221 | 160 | 221 | | plum1 | 255 | 187 | 255 | | plum2 | 238 | 174 | 238 | | plum3 | 205 | 150 | 205 | | plum4 | 139 | 102 | 139 | | powder blue | 176 | 224 | 230 | | PowderBlue | 176 | 224 | 230 | | purple | 128 | 0 | 128 | | purple1 | 155 | 48 | 255 | | purple2 | 145 | 44 | 238 | | purple3 | 125 | 38 | 205 | | purple4 | 85 | 26 | 139 | | red | 255 | 0 | 0 | | red1 | 255 | 0 | 0 | | red2 | 238 | 0 | 0 | | red3 | 205 | 0 | 0 | | red4 | 139 | 0 | 0 | | rosy brown | 188 | 143 | 143 | | RosyBrown | 188 | 143 | 143 | | RosyBrown1 | 255 | 193 | 
193 | | RosyBrown2 | 238 | 180 | 180 | | RosyBrown3 | 205 | 155 | 155 | | RosyBrown4 | 139 | 105 | 105 | | royal blue | 65 | 105 | 225 | | RoyalBlue | 65 | 105 | 225 | | RoyalBlue1 | 72 | 118 | 255 | | RoyalBlue2 | 67 | 110 | 238 | | RoyalBlue3 | 58 | 95 | 205 | | RoyalBlue4 | 39 | 64 | 139 | | saddle brown | 139 | 69 | 19 | | SaddleBrown | 139 | 69 | 19 | | salmon | 250 | 128 | 114 | | salmon1 | 255 | 140 | 105 | | salmon2 | 238 | 130 | 98 | | salmon3 | 205 | 112 | 84 | | salmon4 | 139 | 76 | 57 | | sandy brown | 244 | 164 | 96 | | SandyBrown | 244 | 164 | 96 | | sea green | 46 | 139 | 87 | | SeaGreen | 46 | 139 | 87 | | SeaGreen1 | 84 | 255 | 159 | | SeaGreen2 | 78 | 238 | 148 | | SeaGreen3 | 67 | 205 | 128 | | SeaGreen4 | 46 | 139 | 87 | | seashell | 255 | 245 | 238 | | seashell1 | 255 | 245 | 238 | | seashell2 | 238 | 229 | 222 | | seashell3 | 205 | 197 | 191 | | seashell4 | 139 | 134 | 130 | | sienna | 160 | 82 | 45 | | sienna1 | 255 | 130 | 71 | | sienna2 | 238 | 121 | 66 | | sienna3 | 205 | 104 | 57 | | sienna4 | 139 | 71 | 38 | | silver | 192 | 192 | 192 | | sky blue | 135 | 206 | 235 | | SkyBlue | 135 | 206 | 235 | | SkyBlue1 | 135 | 206 | 255 | | SkyBlue2 | 126 | 192 | 238 | | SkyBlue3 | 108 | 166 | 205 | | SkyBlue4 | 74 | 112 | 139 | | slate blue | 106 | 90 | 205 | | slate gray | 112 | 128 | 144 | | slate grey | 112 | 128 | 144 | | SlateBlue | 106 | 90 | 205 | | SlateBlue1 | 131 | 111 | 255 | | SlateBlue2 | 122 | 103 | 238 | | SlateBlue3 | 105 | 89 | 205 | | SlateBlue4 | 71 | 60 | 139 | | SlateGray | 112 | 128 | 144 | | SlateGray1 | 198 | 226 | 255 | | SlateGray2 | 185 | 211 | 238 | | SlateGray3 | 159 | 182 | 205 | | SlateGray4 | 108 | 123 | 139 | | SlateGrey | 112 | 128 | 144 | | snow | 255 | 250 | 250 | | snow1 | 255 | 250 | 250 | | snow2 | 238 | 233 | 233 | | snow3 | 205 | 201 | 201 | | snow4 | 139 | 137 | 137 | | spring green | 0 | 255 | 127 | | SpringGreen | 0 | 255 | 127 | | SpringGreen1 | 0 | 255 | 127 | | SpringGreen2 | 0 | 238 | 118 | | 
SpringGreen3 | 0 | 205 | 102 | | SpringGreen4 | 0 | 139 | 69 | | steel blue | 70 | 130 | 180 | | SteelBlue | 70 | 130 | 180 | | SteelBlue1 | 99 | 184 | 255 | | SteelBlue2 | 92 | 172 | 238 | | SteelBlue3 | 79 | 148 | 205 | | SteelBlue4 | 54 | 100 | 139 | | tan | 210 | 180 | 140 | | tan1 | 255 | 165 | 79 | | tan2 | 238 | 154 | 73 | | tan3 | 205 | 133 | 63 | | tan4 | 139 | 90 | 43 | | teal | 0 | 128 | 128 | | thistle | 216 | 191 | 216 | | thistle1 | 255 | 225 | 255 | | thistle2 | 238 | 210 | 238 | | thistle3 | 205 | 181 | 205 | | thistle4 | 139 | 123 | 139 | | tomato | 255 | 99 | 71 | | tomato1 | 255 | 99 | 71 | | tomato2 | 238 | 92 | 66 | | tomato3 | 205 | 79 | 57 | | tomato4 | 139 | 54 | 38 | | turquoise | 64 | 224 | 208 | | turquoise1 | 0 | 245 | 255 | | turquoise2 | 0 | 229 | 238 | | turquoise3 | 0 | 197 | 205 | | turquoise4 | 0 | 134 | 139 | | violet | 238 | 130 | 238 | | violet red | 208 | 32 | 144 | | VioletRed | 208 | 32 | 144 | | VioletRed1 | 255 | 62 | 150 | | VioletRed2 | 238 | 58 | 140 | | VioletRed3 | 205 | 50 | 120 | | VioletRed4 | 139 | 34 | 82 | | wheat | 245 | 222 | 179 | | wheat1 | 255 | 231 | 186 | | wheat2 | 238 | 216 | 174 | | wheat3 | 205 | 186 | 150 | | wheat4 | 139 | 126 | 102 | | white | 255 | 255 | 255 | | white smoke | 245 | 245 | 245 | | WhiteSmoke | 245 | 245 | 245 | | yellow | 255 | 255 | 0 | | yellow green | 154 | 205 | 50 | | yellow1 | 255 | 255 | 0 | | yellow2 | 238 | 238 | 0 | | yellow3 | 205 | 205 | 0 | | yellow4 | 139 | 139 | 0 | | YellowGreen | 154 | 205 | 50 | Portability issues ------------------ **Mac OS X** On Mac OS X, the following additional system colors are available (note that the actual color values depend on the currently active OS theme, and typically many of these will in fact be patterns rather than pure colors): | | | --- | | systemActiveAreaFill | | systemAlertActiveText | | systemAlertBackgroundActive | | systemAlertBackgroundInactive | | systemAlertInactiveText | | systemAlternatePrimaryHighlightColor | | 
systemAppleGuideCoachmark | | systemBevelActiveDark | | systemBevelActiveLight | | systemBevelButtonActiveText | | systemBevelButtonInactiveText | | systemBevelButtonPressedText | | systemBevelButtonStickyActiveText | | systemBevelButtonStickyInactiveText | | systemBevelInactiveDark | | systemBevelInactiveLight | | systemBlack | | systemBlackText | | systemButtonActiveDarkHighlight | | systemButtonActiveDarkShadow | | systemButtonActiveLightHighlight | | systemButtonActiveLightShadow | | systemButtonFace | | systemButtonFaceActive | | systemButtonFaceInactive | | systemButtonFacePressed | | systemButtonFrame | | systemButtonFrameActive | | systemButtonFrameInactive | | systemButtonInactiveDarkHighlight | | systemButtonInactiveDarkShadow | | systemButtonInactiveLightHighlight | | systemButtonInactiveLightShadow | | systemButtonPressedDarkHighlight | | systemButtonPressedDarkShadow | | systemButtonPressedLightHighlight | | systemButtonPressedLightShadow | | systemButtonText | | systemChasingArrows | | systemDialogActiveText | | systemDialogBackgroundActive | | systemDialogBackgroundInactive | | systemDialogInactiveText | | systemDocumentWindowBackground | | systemDocumentWindowTitleActiveText | | systemDocumentWindowTitleInactiveText | | systemDragHilite | | systemDrawerBackground | | systemFinderWindowBackground | | systemFocusHighlight | | systemHighlight | | systemHighlightAlternate | | systemHighlightSecondary | | systemHighlightText | | systemIconLabelBackground | | systemIconLabelBackgroundSelected | | systemIconLabelSelectedText | | systemIconLabelText | | systemListViewBackground | | systemListViewColumnDivider | | systemListViewEvenRowBackground | | systemListViewOddRowBackground | | systemListViewSeparator | | systemListViewSortColumnBackground | | systemListViewText | | systemListViewWindowHeaderBackground | | systemMenu | | systemMenuActive | | systemMenuActiveText | | systemMenuBackground | | systemMenuBackgroundSelected | | systemMenuDisabled | | 
systemMenuItemActiveText | | systemMenuItemDisabledText | | systemMenuItemSelectedText | | systemMenuText | | systemMetalBackground | | systemModelessDialogActiveText | | systemModelessDialogBackgroundActive | | systemModelessDialogBackgroundInactive | | systemModelessDialogInactiveText | | systemMovableModalBackground | | systemMovableModalWindowTitleActiveText | | systemMovableModalWindowTitleInactiveText | | systemNotificationText | | systemNotificationWindowBackground | | systemPlacardActiveText | | systemPlacardBackground | | systemPlacardInactiveText | | systemPlacardPressedText | | systemPopupArrowActive | | systemPopupArrowInactive | | systemPopupArrowPressed | | systemPopupButtonActiveText | | systemPopupButtonInactiveText | | systemPopupButtonPressedText | | systemPopupLabelActiveText | | systemPopupLabelInactiveText | | systemPopupWindowTitleActiveText | | systemPopupWindowTitleInactiveText | | systemPrimaryHighlightColor | | systemPushButtonActiveText | | systemPushButtonInactiveText | | systemPushButtonPressedText | | systemRootMenuActiveText | | systemRootMenuDisabledText | | systemRootMenuSelectedText | | systemScrollBarDelimiterActive | | systemScrollBarDelimiterInactive | | systemSecondaryGroupBoxBackground | | systemSecondaryHighlightColor | | systemSheetBackground | | systemSheetBackgroundOpaque | | systemSheetBackgroundTransparent | | systemStaticAreaFill | | systemSystemDetailText | | systemTabFrontActiveText | | systemTabFrontInactiveText | | systemTabNonFrontActiveText | | systemTabNonFrontInactiveText | | systemTabNonFrontPressedText | | systemTabPaneBackground | | systemToolbarBackground | | systemTransparent | | systemUtilityWindowBackgroundActive | | systemUtilityWindowBackgroundInactive | | systemUtilityWindowTitleActiveText | | systemUtilityWindowTitleInactiveText | | systemWhite | | systemWhiteText | | systemWindowBody | | systemWindowHeaderActiveText | | systemWindowHeaderBackground | | systemWindowHeaderInactiveText | **Windows** On 
Windows, the following additional system colors are available (note that the actual color values depend on the currently active OS theme): | | | | --- | --- | | system3dDarkShadow | systemHighlight | | system3dLight | systemHighlightText | | systemActiveBorder | systemInactiveBorder | | systemActiveCaption | systemInactiveCaption | | systemAppWorkspace | systemInactiveCaptionText | | systemBackground | systemInfoBackground | | systemButtonFace | systemInfoText | | systemButtonHighlight | systemMenu | | systemButtonShadow | systemMenuText | | systemButtonText | systemScrollbar | | systemCaptionText | systemWindow | | systemDisabledText | systemWindowFrame | | systemGrayText | systemWindowText | See also -------- **[options](options.htm)**, **[Tk\_GetColor](https://www.tcl.tk/man/tcl/TkLib/GetColor.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/colors.htm>
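Any of the symbolic names in the table above can be used wherever Tk expects a color; from script, **winfo rgb** shows how a name resolves, reported as 16-bit-per-channel values (a brief sketch; the widget name `.l` is illustrative):

```tcl
# Symbolic color names work in any color option, in any of the
# spellings listed above (spaced, CamelCase, or numbered shades).
label .l -text "Hello" -background AliceBlue -foreground "dark slate gray"
pack .l

# winfo rgb resolves a name to 16-bit-per-channel values, so each
# 8-bit component in the table appears scaled by 257; "red" yields 65535 0 0.
puts [winfo rgb . red]
```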
fontchooser =========== [NAME](fontchooser.htm#M2) fontchooser — control font selection dialog [SYNOPSIS](fontchooser.htm#M3) [DESCRIPTION](fontchooser.htm#M4) [**tk fontchooser** **configure** ?*-option value -option value ...*?](fontchooser.htm#M5) [**tk fontchooser** **show**](fontchooser.htm#M6) [**tk fontchooser** **hide**](fontchooser.htm#M7) [CONFIGURATION OPTIONS](fontchooser.htm#M8) [**-parent**](fontchooser.htm#M9) [**-title**](fontchooser.htm#M10) [**-font**](fontchooser.htm#M11) [**-command**](fontchooser.htm#M12) [**-visible**](fontchooser.htm#M13) [VIRTUAL EVENTS](fontchooser.htm#M14) [**<<TkFontchooserVisibility>>**](fontchooser.htm#M15) [**<<TkFontchooserFontChanged>>**](fontchooser.htm#M16) [NOTES](fontchooser.htm#M17) [EXAMPLE](fontchooser.htm#M18) [SEE ALSO](fontchooser.htm#M19) [KEYWORDS](fontchooser.htm#M20) Name ---- fontchooser — control font selection dialog Synopsis -------- **[tk fontchooser](tk.htm)** **configure** ?*-option value -option value ...*? **[tk fontchooser](tk.htm)** **show** **[tk fontchooser](tk.htm)** **hide** Description ----------- The **[tk fontchooser](tk.htm)** command controls the Tk font selection dialog. It uses the native platform font selection dialog where available, or a dialog implemented in Tcl otherwise. Unlike most of the other Tk dialog commands, **[tk fontchooser](tk.htm)** does not return an immediate result, as on some platforms (Mac OS X) the standard font dialog is modeless while on others (Windows) it is modal. To accommodate this difference, all user interaction with the dialog will be communicated to the caller via callbacks or virtual events. The **[tk fontchooser](tk.htm)** command can have one of the following forms: **tk fontchooser** **configure** ?*-option value -option value ...*? Set or query one or more of the configuration options below (analogous to Tk widget configuration). **tk fontchooser** **show** Show the font selection dialog.
Depending on the platform, may return immediately or only once the dialog has been withdrawn. **tk fontchooser** **hide** Hide the font selection dialog if it is visible and cause any pending **[tk fontchooser](tk.htm)** **show** command to return. Configuration options --------------------- **-parent** Specifies/returns the logical parent window of the font selection dialog (similar to the **-parent** option to other dialogs). The font selection dialog is hidden if it is visible when the parent window is destroyed. **-title** Specifies/returns the title of the dialog. Has no effect on platforms where the font selection dialog does not support titles. **-font** Specifies/returns the font that is currently selected in the dialog if it is visible, or that will be initially selected when the dialog is shown (if supported by the platform). Can be set to the empty string to indicate that no font should be selected. Fonts can be specified in any form given by the "FONT DESCRIPTION" section in the **[font](font.htm)** manual page. **-command** Specifies/returns the command prefix to be called when a font selection has been made by the user. The command prefix is evaluated at the global level after having the specification of the selected font appended. On platforms where the font selection dialog offers the user control of further font attributes (such as color), additional key/value pairs may be appended before evaluation. Can be set to the empty string to indicate that no callback should be invoked. Fonts are specified by a list of form [3] of the "FONT DESCRIPTION" section in the **[font](font.htm)** manual page (i.e. a list of the form *{family size style ?style ...?}*). **-visible** Read-only option that returns a boolean indicating whether the font selection dialog is currently visible. Attempting to set this option results in an error. 
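Putting these options together, a typical setup configures the parent, title, initial font, and callback before showing the dialog (a minimal sketch; `fontChosen` is a hypothetical callback name):

```tcl
# Hypothetical callback: receives the selected font specification;
# on some platforms extra key/value pairs may follow it.
proc fontChosen {font args} {
    puts "selected font: $font"
}

tk fontchooser configure -parent . \
    -title "Select a font" \
    -font {Helvetica 12} \
    -command fontChosen
tk fontchooser show

# -visible is read-only and may be queried at any time.
puts [tk fontchooser configure -visible]
```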
Virtual events -------------- **<<TkFontchooserVisibility>>** Sent to the dialog parent whenever the visibility of the font selection dialog changes, both as a result of user action (e.g. disposing of the dialog via OK/Cancel button or close box) and of the **[tk fontchooser](tk.htm)** **show**/**hide** commands being called. Binding scripts can determine the current visibility of the dialog by querying the **-visible** configuration option. **<<TkFontchooserFontChanged>>** Sent to the dialog parent whenever the font selection dialog is visible and the selected font changes, both as a result of user action and of the **-font** configuration option being set. Binding scripts can determine the currently selected font by querying the **-font** configuration option. Notes ----- Callers should not expect a result from **[tk fontchooser](tk.htm)** **show** and may not assume that the dialog has been withdrawn or closed when the command returns. All user interaction with the dialog is communicated to the caller via the **-command** callback and the **<<TkFontchooser\*>>** virtual events. It is implementation dependent which exact user actions result in the callback being called resp. the virtual events being sent. Where an Apply or OK button is present in the dialog, that button will trigger the **-command** callback and **<<TkFontchooserFontChanged>>** virtual event. On some implementations other user actions may also have that effect; on Mac OS X for instance, the standard font selection dialog immediately reflects all user choices to the caller. In the presence of multiple widgets intended to be influenced by the font selection dialog, care needs to be taken to correctly handle focus changes: the font selected in the dialog should always match the current font of the widget with the focus, and the **-command** callback should only act on the widget with the focus. 
The recommended practice is to set font dialog **-font** and **-command** configuration options in per-widget **<FocusIn>** handlers (and if necessary to unset them, i.e. set them to the empty string, in corresponding **<FocusOut>** handlers). This is particularly important for implementers of library code using the font selection dialog, to avoid conflicting with application code that may also want to use the dialog. Because the font selection dialog is application-global, in the presence of multiple interpreters calling **[tk fontchooser](tk.htm)**, only the **-command** callback set by the interpreter that most recently called **[tk fontchooser](tk.htm)** **configure** or **[tk fontchooser](tk.htm)** **show** will be invoked in response to user action and only the **-parent** set by that interpreter will receive **<<TkFontchooser\*>>** virtual events. The font dialog implementation may only store (and return) **[font](font.htm)** **actual** data as the value of the **-font** configuration option. This can be an issue when **-font** is set to a named font: if that font is subsequently changed, the font dialog **-font** option needs to be set again to ensure its selected font matches the new value of the named font. Example -------

```
proc fontchooserDemo {} {
    wm title . "Font Chooser Demo"
    tk fontchooser configure -parent .
    button .b -command fontchooserToggle -takefocus 0
    fontchooserVisibility .b
    bind . <<TkFontchooserVisibility>> \
            [list fontchooserVisibility .b]
    foreach w {.t1 .t2} {
        text $w -width 20 -height 4 -borderwidth 1 -relief solid
        bind $w <FocusIn> [list fontchooserFocus $w]
        $w insert end "Text Widget $w"
    }
    .t1 configure -font {Courier 14}
    .t2 configure -font {Times 16}
    pack .b .t1 .t2; focus .t1
}
proc fontchooserToggle {} {
    tk fontchooser [expr {
            [tk fontchooser configure -visible] ?
            "hide" : "show"}]
}
proc fontchooserVisibility {w} {
    $w configure -text [expr {
            [tk fontchooser configure -visible] ?
            "Hide Font Dialog" : "Show Font Dialog"}]
}
proc fontchooserFocus {w} {
    tk fontchooser configure -font [$w cget -font] \
            -command [list fontchooserFontSelection $w]
}
proc fontchooserFontSelection {w font args} {
    $w configure -font [font actual $font]
}
fontchooserDemo
```

See also -------- **[font](font.htm)**, **[tk](tk.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/fontchooser.htm> tcl_tk grab grab ==== [NAME](grab.htm#M2) grab — Confine pointer and keyboard events to a window sub-tree [SYNOPSIS](grab.htm#M3) [DESCRIPTION](grab.htm#M4) [**grab** ?**-global**? *window*](grab.htm#M5) [**grab current** ?*window*?](grab.htm#M6) [**grab release** *window*](grab.htm#M7) [**grab set** ?**-global**? *window*](grab.htm#M8) [**grab status** *window*](grab.htm#M9) [WARNING](grab.htm#M10) [BUGS](grab.htm#M11) [EXAMPLE](grab.htm#M12) [SEE ALSO](grab.htm#M13) [KEYWORDS](grab.htm#M14) Name ---- grab — Confine pointer and keyboard events to a window sub-tree Synopsis -------- **grab** ?**-global**? *window* **grab** *option* ?*arg arg* ...? Description ----------- This command implements simple pointer and keyboard grabs for Tk. Tk's grabs are different from the grabs described in the Xlib documentation. When a grab is set for a particular window, Tk restricts all pointer events to the grab window and its descendants in Tk's window hierarchy. Whenever the pointer is within the grab window's subtree, the pointer will behave exactly the same as if there had been no grab at all and all events will be reported in the normal fashion. When the pointer is outside *window*'s tree, button presses and releases and mouse motion events are reported to *window*, and window entry and window exit events are ignored.
The grab subtree “owns” the pointer: windows outside the grab subtree will be visible on the screen but they will be insensitive until the grab is released. The tree of windows underneath the grab window can include top-level windows, in which case all of those top-level windows and their descendants will continue to receive mouse events during the grab. Two forms of grabs are possible: local and global. A local grab affects only the grabbing application: events will be reported to other applications as if the grab had never occurred. Grabs are local by default. A global grab locks out all applications on the screen, so that only the given subtree of the grabbing application will be sensitive to pointer events (mouse button presses, mouse button releases, pointer motions, window entries, and window exits). During global grabs the window manager will not receive pointer events either. During local grabs, keyboard events (key presses and key releases) are delivered as usual: the window manager controls which application receives keyboard events, and if they are sent to any window in the grabbing application then they are redirected to the focus window. During a global grab Tk grabs the keyboard so that all keyboard events are always sent to the grabbing application. The **[focus](focus.htm)** command is still used to determine which window in the application receives the keyboard events. The keyboard grab is released when the grab is released. Grabs apply to particular displays. If an application has windows on multiple displays then it can establish a separate grab on each display. The grab on a particular display affects only the windows on that display. It is possible for different applications on a single display to have simultaneous local grabs, but only one application can have a global grab on a given display at once. The **grab** command can take any of the following forms: **grab** ?**-global**? *window* Same as **grab set**, described below. 
**grab current** ?*window*? If *window* is specified, returns the name of the current grab window in this application for *window*'s display, or an empty string if there is no such window. If *window* is omitted, the command returns a list whose elements are all of the windows grabbed by this application for all displays, or an empty string if the application has no grabs. **grab release** *window* Releases the grab on *window* if there is one, otherwise does nothing. Returns an empty string. **grab set** ?**-global**? *window* Sets a grab on *window*. If **-global** is specified then the grab is global, otherwise it is local. If a grab was already in effect for this application on *window*'s display then it is automatically released. If there is already a grab on *window* and it has the same global/local form as the requested grab, then the command does nothing. Returns an empty string. **grab status** *window* Returns **none** if no grab is currently set on *window*, **local** if a local grab is set on *window*, and **global** if a global grab is set. Warning ------- It is very easy to use global grabs to render a display completely unusable (e.g. by setting a grab on a widget which does not respond to events and not providing any mechanism for releasing the grab). Take *extreme* care when using them! Bugs ---- It took an incredibly complex and gross implementation to produce the simple grab effect described above. Given the current implementation, it is not safe for applications to use the Xlib grab facilities at all except through the Tk grab procedures. If applications try to manipulate X's grab mechanisms directly, things will probably break. If a single process is managing several different Tk applications, only one of those applications can have a local grab for a given display at any given time. If the applications are in different processes, this restriction does not exist. Example ------- Set a grab so that only one button may be clicked out of a group. 
The other buttons are unresponsive to the mouse until the middle button is clicked.

```
pack [button .b1 -text "Click me! #1" -command {destroy .b1}]
pack [button .b2 -text "Click me! #2" -command {destroy .b2}]
pack [button .b3 -text "Click me! #3" -command {destroy .b3}]
grab .b2
```

See also -------- **[busy](busy.htm)** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/grab.htm> tcl_tk scrollbar scrollbar ========= [NAME](scrollbar.htm#M2) scrollbar — Create and manipulate 'scrollbar' scrolling control and indicator widgets [SYNOPSIS](scrollbar.htm#M3) [STANDARD OPTIONS](scrollbar.htm#M4) [-activebackground, activeBackground, Foreground](options.htm#M-activebackground) [-background or -bg, background, Background](options.htm#M-background) [-borderwidth or -bd, borderWidth, BorderWidth](options.htm#M-borderwidth) [-cursor, cursor, Cursor](options.htm#M-cursor) [-highlightbackground, highlightBackground, HighlightBackground](options.htm#M-highlightbackground) [-highlightcolor, highlightColor, HighlightColor](options.htm#M-highlightcolor) [-highlightthickness, highlightThickness, HighlightThickness](options.htm#M-highlightthickness) [-jump, jump, Jump](options.htm#M-jump) [-orient, orient, Orient](options.htm#M-orient) [-relief, relief, Relief](options.htm#M-relief) [-repeatdelay, repeatDelay, RepeatDelay](options.htm#M-repeatdelay) [-repeatinterval, repeatInterval, RepeatInterval](options.htm#M-repeatinterval) [-takefocus, takeFocus, TakeFocus](options.htm#M-takefocus) [-troughcolor, troughColor, Background](options.htm#M-troughcolor) [WIDGET-SPECIFIC OPTIONS](scrollbar.htm#M5) [-activerelief, activeRelief, ActiveRelief](scrollbar.htm#M6) [-command, command, Command](scrollbar.htm#M7) [-elementborderwidth, elementBorderWidth, BorderWidth](scrollbar.htm#M8) [-width, width, Width](scrollbar.htm#M9) [DESCRIPTION](scrollbar.htm#M10) [ELEMENTS](scrollbar.htm#M11) [**arrow1**](scrollbar.htm#M12)
[**trough1**](scrollbar.htm#M13) [**slider**](scrollbar.htm#M14) [**trough2**](scrollbar.htm#M15) [**arrow2**](scrollbar.htm#M16) [WIDGET COMMAND](scrollbar.htm#M17) [*pathName* **activate** ?*element*?](scrollbar.htm#M18) [*pathName* **cget** *option*](scrollbar.htm#M19) [*pathName* **configure** ?*option*? ?*value option value ...*?](scrollbar.htm#M20) [*pathName* **delta** *deltaX deltaY*](scrollbar.htm#M21) [*pathName* **fraction** *x y*](scrollbar.htm#M22) [*pathName* **get**](scrollbar.htm#M23) [*pathName* **identify** *x y*](scrollbar.htm#M24) [*pathName* **set** *first last*](scrollbar.htm#M25) [SCROLLING COMMANDS](scrollbar.htm#M26) [*prefix* **moveto** *fraction*](scrollbar.htm#M27) [*prefix* **scroll** *number* **units**](scrollbar.htm#M28) [*prefix* **scroll** *number* **pages**](scrollbar.htm#M29) [OLD COMMAND SYNTAX](scrollbar.htm#M30) [*pathName* **set** *totalUnits windowUnits firstUnit lastUnit*](scrollbar.htm#M31) [*prefix* *unit*](scrollbar.htm#M32) [BINDINGS](scrollbar.htm#M33) [EXAMPLE](scrollbar.htm#M34) [SEE ALSO](scrollbar.htm#M35) [KEYWORDS](scrollbar.htm#M36) Name ---- scrollbar — Create and manipulate 'scrollbar' scrolling control and indicator widgets Synopsis -------- **scrollbar** *pathName* ?*options*? 
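As a quick illustration of the synopsis, a scrollbar is normally paired with a scrollable widget so that each can drive the other — a minimal sketch (the listbox and the widget names are illustrative; the command protocol is detailed under SCROLLING COMMANDS below):

```tcl
# Minimal sketch (illustrative names): a vertical scrollbar paired with a
# listbox.  The listbox reports its view via ".s set"; the scrollbar
# requests view changes via ".l yview".
listbox .l -yscrollcommand {.s set}
scrollbar .s -orient vertical -command {.l yview}
pack .s -side right -fill y
pack .l -side left -fill both -expand 1
for {set i 0} {$i < 100} {incr i} {
    .l insert end "line $i"
}
```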
Standard options ---------------- **[-activebackground, activeBackground, Foreground](options.htm#M-activebackground)** **[-background or -bg, background, Background](options.htm#M-background)** **[-borderwidth or -bd, borderWidth, BorderWidth](options.htm#M-borderwidth)** **[-cursor, cursor, Cursor](options.htm#M-cursor)** **[-highlightbackground, highlightBackground, HighlightBackground](options.htm#M-highlightbackground)** **[-highlightcolor, highlightColor, HighlightColor](options.htm#M-highlightcolor)** **[-highlightthickness, highlightThickness, HighlightThickness](options.htm#M-highlightthickness)** **[-jump, jump, Jump](options.htm#M-jump)** **[-orient, orient, Orient](options.htm#M-orient)** **[-relief, relief, Relief](options.htm#M-relief)** **[-repeatdelay, repeatDelay, RepeatDelay](options.htm#M-repeatdelay)** **[-repeatinterval, repeatInterval, RepeatInterval](options.htm#M-repeatinterval)** **[-takefocus, takeFocus, TakeFocus](options.htm#M-takefocus)** **[-troughcolor, troughColor, Background](options.htm#M-troughcolor)** Widget-specific options ----------------------- Command-Line Name: **-activerelief** Database Name: **activeRelief** Database Class: **ActiveRelief** Specifies the relief to use when displaying the element that is active, if any. Elements other than the active element are always displayed with a raised relief. Command-Line Name: **-command** Database Name: **command** Database Class: **Command** Specifies the prefix of a Tcl command to invoke to change the view in the widget associated with the scrollbar. When a user requests a view change by manipulating the scrollbar, a Tcl command is invoked. The actual command consists of this option followed by additional information as described later. This option almost always has a value such as **.t xview** or **.t yview**, consisting of the name of a widget and either **xview** (if the scrollbar is for horizontal scrolling) or **yview** (for vertical scrolling). 
All scrollable widgets have **xview** and **yview** commands that take exactly the additional arguments appended by the scrollbar as described in **[SCROLLING COMMANDS](#M26)** below. Command-Line Name: **-elementborderwidth** Database Name: **elementBorderWidth** Database Class: **BorderWidth** Specifies the width of borders drawn around the internal elements of the scrollbar (the two arrows and the slider). The value may have any of the forms acceptable to **[Tk\_GetPixels](https://www.tcl.tk/man/tcl/TkLib/GetPixels.htm)**. If this value is less than zero, the value of the **-borderwidth** option is used in its place. Command-Line Name: **-width** Database Name: **width** Database Class: **Width** Specifies the desired narrow dimension of the scrollbar window, not including 3-D border, if any. For vertical scrollbars this will be the width and for horizontal scrollbars this will be the height. The value may have any of the forms acceptable to **[Tk\_GetPixels](https://www.tcl.tk/man/tcl/TkLib/GetPixels.htm)**. Description ----------- The **scrollbar** command creates a new window (given by the *pathName* argument) and makes it into a scrollbar widget. Additional options, described above, may be specified on the command line or in the option database to configure aspects of the scrollbar such as its colors, orientation, and relief. The **scrollbar** command returns its *pathName* argument. At the time this command is invoked, there must not exist a window named *pathName*, but *pathName*'s parent must exist. A scrollbar is a widget that displays two arrows, one at each end of the scrollbar, and a *slider* in the middle portion of the scrollbar. It provides information about what is visible in an *associated window* that displays a document of some sort (such as a file being edited or a drawing). The position and size of the slider indicate which portion of the document is visible in the associated window. 
For example, if the slider in a vertical scrollbar covers the top third of the area between the two arrows, it means that the associated window displays the top third of its document. Scrollbars can be used to adjust the view in the associated window by clicking or dragging with the mouse. See the **[BINDINGS](#M33)** section below for details. Elements -------- A scrollbar displays five elements, which are referred to in the widget commands for the scrollbar: **arrow1** The top or left arrow in the scrollbar. **trough1** The region between the slider and **arrow1**. **slider** The rectangle that indicates what is visible in the associated widget. **trough2** The region between the slider and **arrow2**. **arrow2** The bottom or right arrow in the scrollbar. Widget command -------------- The **scrollbar** command creates a new Tcl command whose name is *pathName*. This command may be used to invoke various operations on the widget. It has the following general form: ``` *pathName option* ?*arg arg ...*? ``` *Option* and the *arg*s determine the exact behavior of the command. The following commands are possible for scrollbar widgets: *pathName* **activate** ?*element*? Marks the element indicated by *element* as active, which causes it to be displayed as specified by the **-activebackground** and **-activerelief** options. The only element values understood by this command are **arrow1**, **slider**, or **arrow2**. If any other value is specified then no element of the scrollbar will be active. If *element* is not specified, the command returns the name of the element that is currently active, or an empty string if no element is active. *pathName* **cget** *option* Returns the current value of the configuration option given by *option*. *Option* may have any of the values accepted by the **scrollbar** command. *pathName* **configure** ?*option*? ?*value option value ...*? Query or modify the configuration options of the widget. 
If no *option* is specified, returns a list describing all of the available options for *pathName* (see **[Tk\_ConfigureInfo](https://www.tcl.tk/man/tcl/TkLib/ConfigWidg.htm)** for information on the format of this list). If *option* is specified with no *value*, then the command returns a list describing the one named option (this list will be identical to the corresponding sublist of the value returned if no *option* is specified). If one or more *option-value* pairs are specified, then the command modifies the given widget option(s) to have the given value(s); in this case the command returns an empty string. *Option* may have any of the values accepted by the **scrollbar** command. *pathName* **delta** *deltaX deltaY* Returns a real number indicating the fractional change in the scrollbar setting that corresponds to a given change in slider position. For example, if the scrollbar is horizontal, the result indicates how much the scrollbar setting must change to move the slider *deltaX* pixels to the right (*deltaY* is ignored in this case). If the scrollbar is vertical, the result indicates how much the scrollbar setting must change to move the slider *deltaY* pixels down. The arguments and the result may be zero or negative. *pathName* **fraction** *x y* Returns a real number between 0 and 1 indicating where the point given by *x* and *y* lies in the trough area of the scrollbar. The value 0 corresponds to the top or left of the trough, the value 1 corresponds to the bottom or right, 0.5 corresponds to the middle, and so on. *X* and *y* must be pixel coordinates relative to the scrollbar widget. If *x* and *y* refer to a point outside the trough, the closest point in the trough is used. *pathName* **get** Returns the scrollbar settings in the form of a list whose elements are the arguments to the most recent **set** widget command. 
*pathName* **identify** *x y* Returns the name of the element under the point given by *x* and *y* (such as **arrow1**), or an empty string if the point does not lie in any element of the scrollbar. *X* and *y* must be pixel coordinates relative to the scrollbar widget. *pathName* **set** *first last* This command is invoked by the scrollbar's associated widget to tell the scrollbar about the current view in the widget. The command takes two arguments, each of which is a real fraction between 0 and 1. The fractions describe the range of the document that is visible in the associated widget. For example, if *first* is 0.2 and *last* is 0.4, it means that the first part of the document visible in the window is 20% of the way through the document, and the last visible part is 40% of the way through. Scrolling commands ------------------ When the user interacts with the scrollbar, for example by dragging the slider, the scrollbar notifies the associated widget that it must change its view. The scrollbar makes the notification by evaluating a Tcl command generated from the scrollbar's **-command** option. The command may take any of the following forms. In each case, *prefix* is the contents of the **-command** option, which usually has a form like “**.t yview**”. *prefix* **moveto** *fraction* *Fraction* is a real number between 0 and 1. The widget should adjust its view so that the point given by *fraction* appears at the beginning of the widget. If *fraction* is 0 it refers to the beginning of the document. 1.0 refers to the end of the document, 0.333 refers to a point one-third of the way through the document, and so on. *prefix* **scroll** *number* **units** The widget should adjust its view by *number* units. The units are defined in whatever way makes sense for the widget, such as characters or lines in a text widget.
*Number* is either 1, which means one unit should scroll off the top or left of the window, or -1, which means that one unit should scroll off the bottom or right of the window. *prefix* **scroll** *number* **pages** The widget should adjust its view by *number* pages. It is up to the widget to define the meaning of a page; typically it is slightly less than what fits in the window, so that there is a slight overlap between the old and new views. *Number* is either 1, which means the next page should become visible, or -1, which means that the previous page should become visible. Old command syntax ------------------ In versions of Tk before 4.0, the **set** and **get** widget commands used a different form. This form is still supported for backward compatibility, but it is deprecated. In the old command syntax, the **set** widget command has the following form: *pathName* **set** *totalUnits windowUnits firstUnit lastUnit* In this form the arguments are all integers. *TotalUnits* gives the total size of the object being displayed in the associated widget. The meaning of one unit depends on the associated widget; for example, in a text editor widget units might correspond to lines of text. *WindowUnits* indicates the total number of units that can fit in the associated window at one time. *FirstUnit* and *lastUnit* give the indices of the first and last units currently visible in the associated window (zero corresponds to the first unit of the object). Under the old syntax the **get** widget command returns a list of four integers, consisting of the *totalUnits*, *windowUnits*, *firstUnit*, and *lastUnit* values from the last **set** widget command. The commands generated by scrollbars also have a different form when the old syntax is being used: *prefix* *unit* *Unit* is an integer that indicates what should appear at the top or left of the associated widget's window. 
It has the same meaning as the *firstUnit* and *lastUnit* arguments to the **set** widget command. The most recent **set** widget command determines whether or not to use the old syntax. If it is given two real arguments then the new syntax will be used in the future, and if it is given four integer arguments then the old syntax will be used. Bindings -------- Tk automatically creates class bindings for scrollbars that give them the following default behavior. If the behavior is different for vertical and horizontal scrollbars, the horizontal behavior is described in parentheses. 1. Pressing button 1 over **arrow1** causes the view in the associated widget to shift up (left) by one unit so that the document appears to move down (right) one unit. If the button is held down, the action auto-repeats. 2. Pressing button 1 over **trough1** causes the view in the associated widget to shift up (left) by one screenful so that the document appears to move down (right) one screenful. If the button is held down, the action auto-repeats. 3. Pressing button 1 over the slider and dragging causes the view to drag with the slider. If the **jump** option is true, then the view does not drag along with the slider; it changes only when the mouse button is released. 4. Pressing button 1 over **trough2** causes the view in the associated widget to shift down (right) by one screenful so that the document appears to move up (left) one screenful. If the button is held down, the action auto-repeats. 5. Pressing button 1 over **arrow2** causes the view in the associated widget to shift down (right) by one unit so that the document appears to move up (left) one unit. If the button is held down, the action auto-repeats. 6. If button 2 is pressed over the trough or the slider, it sets the view to correspond to the mouse position; dragging the mouse with button 2 down causes the view to drag with the mouse. 
If button 2 is pressed over one of the arrows, it causes the same behavior as pressing button 1. 7. If button 1 is pressed with the Control key down, then if the mouse is over **arrow1** or **trough1** the view changes to the very top (left) of the document; if the mouse is over **arrow2** or **trough2** the view changes to the very bottom (right) of the document; if the mouse is anywhere else then the button press has no effect. 8. In vertical scrollbars the Up and Down keys have the same behavior as mouse clicks over **arrow1** and **arrow2**, respectively. In horizontal scrollbars these keys have no effect. 9. In vertical scrollbars Control-Up and Control-Down have the same behavior as mouse clicks over **trough1** and **trough2**, respectively. In horizontal scrollbars these keys have no effect. 10. In horizontal scrollbars the Left and Right keys have the same behavior as mouse clicks over **arrow1** and **arrow2**, respectively. In vertical scrollbars these keys have no effect. 11. In horizontal scrollbars Control-Left and Control-Right have the same behavior as mouse clicks over **trough1** and **trough2**, respectively. In vertical scrollbars these keys have no effect. 12. The Prior and Next keys have the same behavior as mouse clicks over **trough1** and **trough2**, respectively. 13. The Home key adjusts the view to the top (left edge) of the document. 14. The End key adjusts the view to the bottom (right edge) of the document. Example ------- Create a window with a scrollable **[text](text.htm)** widget:

```
toplevel .tl
text .tl.t -yscrollcommand {.tl.s set}
scrollbar .tl.s -command {.tl.t yview}
grid .tl.t .tl.s -sticky nsew
grid columnconfigure .tl 0 -weight 1
grid rowconfigure .tl 0 -weight 1
```

See also -------- **ttk::scrollbar** Licensed under [Tcl/Tk terms](http://tcl.tk/software/tcltk/license.html) <https://www.tcl.tk/man/tcl/TkCmd/scrollbar.htm>